Test Report: KVM_Linux_crio 18702

7da1c16e9c0a3f17226e01717faf9df7d280508b:2024-04-21:34140

Failed tests (29/317)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 152.52
32 TestAddons/parallel/MetricsServer 354.24
44 TestAddons/StoppedEnableDisable 154.29
163 TestMultiControlPlane/serial/StopSecondaryNode 142.07
165 TestMultiControlPlane/serial/RestartSecondaryNode 62.24
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 385.16
170 TestMultiControlPlane/serial/StopCluster 142.17
230 TestMultiNode/serial/RestartKeepsNodes 308.19
232 TestMultiNode/serial/StopMultiNode 141.68
239 TestPreload 301.1
247 TestKubernetesUpgrade 847.94
284 TestStartStop/group/old-k8s-version/serial/FirstStart 288.61
297 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
300 TestStartStop/group/no-preload/serial/Stop 139
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 96.23
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
309 TestStartStop/group/old-k8s-version/serial/SecondStart 734.47
325 TestStartStop/group/embed-certs/serial/Stop 139.01
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.47
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
329 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.63
330 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.75
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 338.5
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 317.44
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.47
334 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 137.65
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 405.02
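
To dig into a single failure locally, the subtest name can be passed straight to Go's -run filter from a minikube source checkout. A minimal sketch, assuming the integration suite lives under ./test/integration as in this job's layout; any driver- or binary-specific flags your environment needs are intentionally left out:

    # Re-run one failed test by name (the regex matches the subtest path).
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 30m
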
TestAddons/parallel/Ingress (152.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-337450 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-337450 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-337450 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a9b06fa8-4264-4ab2-90bd-364379ca3429] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a9b06fa8-4264-4ab2-90bd-364379ca3429] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004349159s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-337450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.522559481s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-337450 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.51
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-337450 addons disable ingress --alsologtostderr -v=1: (7.783308115s)
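
The curl step above is the actual failure: the exit status 28 reported over ssh matches curl's operation-timed-out code, so the request to the ingress on 127.0.0.1:80 inside the VM never got a response before the wrapper gave up after roughly 2m9s. A hedged sketch for reproducing the check by hand against this profile (profile name, context, and Host header are taken from this run; the explicit -m timeout is an addition):

    # Re-issue the failing request from inside the minikube VM, with a short curl timeout.
    out/minikube-linux-amd64 -p addons-337450 ssh "curl -sS -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"

    # Check that the ingress-nginx controller and the nginx test pod are actually serving.
    kubectl --context addons-337450 -n ingress-nginx get pods -o wide
    kubectl --context addons-337450 get pods -l run=nginx -o wide
    kubectl --context addons-337450 get ingress
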
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-337450 -n addons-337450
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-337450 logs -n 25: (1.475746126s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-287232 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | -p download-only-287232                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-287232                                                                     | download-only-287232 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-916770                                                                     | download-only-916770 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-287232                                                                     | download-only-287232 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-997979 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | binary-mirror-997979                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34105                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-997979                                                                     | binary-mirror-997979 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-337450 --wait=true                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:26 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ip      | addons-337450 ip                                                                            | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-337450 ssh curl -s                                                                   | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-337450 ssh cat                                                                       | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | /opt/local-path-provisioner/pvc-17b0f281-1dfd-4035-a69d-f977b9bf0dd8_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-337450 addons                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-337450 addons                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | -p addons-337450                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | -p addons-337450                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-337450 ip                                                                            | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:28 UTC | 21 Apr 24 18:28 UTC |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:28 UTC | 21 Apr 24 18:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:28 UTC | 21 Apr 24 18:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:22:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:22:35.227011   12353 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:22:35.227110   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:22:35.227117   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:22:35.227136   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:22:35.227321   12353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:22:35.227937   12353 out.go:298] Setting JSON to false
	I0421 18:22:35.228773   12353 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":253,"bootTime":1713723502,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:22:35.228837   12353 start.go:139] virtualization: kvm guest
	I0421 18:22:35.231091   12353 out.go:177] * [addons-337450] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:22:35.232448   12353 notify.go:220] Checking for updates...
	I0421 18:22:35.232456   12353 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:22:35.233745   12353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:22:35.235090   12353 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:22:35.236482   12353 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:35.238011   12353 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:22:35.239735   12353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:22:35.241366   12353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:22:35.274999   12353 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 18:22:35.276383   12353 start.go:297] selected driver: kvm2
	I0421 18:22:35.276402   12353 start.go:901] validating driver "kvm2" against <nil>
	I0421 18:22:35.276418   12353 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:22:35.277169   12353 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:22:35.277271   12353 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:22:35.292389   12353 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:22:35.292454   12353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:22:35.292671   12353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:22:35.292747   12353 cni.go:84] Creating CNI manager for ""
	I0421 18:22:35.292763   12353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:22:35.292774   12353 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 18:22:35.292845   12353 start.go:340] cluster config:
	{Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:22:35.292950   12353 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:22:35.294718   12353 out.go:177] * Starting "addons-337450" primary control-plane node in "addons-337450" cluster
	I0421 18:22:35.295942   12353 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:22:35.295985   12353 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:22:35.295999   12353 cache.go:56] Caching tarball of preloaded images
	I0421 18:22:35.296086   12353 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:22:35.296097   12353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:22:35.296420   12353 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/config.json ...
	I0421 18:22:35.296451   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/config.json: {Name:mke0896c50ea6ceabbcecb759314a92bd3d3edbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:22:35.296607   12353 start.go:360] acquireMachinesLock for addons-337450: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:22:35.296669   12353 start.go:364] duration metric: took 45.954µs to acquireMachinesLock for "addons-337450"
	I0421 18:22:35.296692   12353 start.go:93] Provisioning new machine with config: &{Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:22:35.296763   12353 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 18:22:35.298476   12353 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0421 18:22:35.298633   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:22:35.298679   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:22:35.312953   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0421 18:22:35.313381   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:22:35.313930   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:22:35.313954   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:22:35.314294   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:22:35.314500   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:22:35.314636   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:22:35.314799   12353 start.go:159] libmachine.API.Create for "addons-337450" (driver="kvm2")
	I0421 18:22:35.314830   12353 client.go:168] LocalClient.Create starting
	I0421 18:22:35.314867   12353 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:22:35.352695   12353 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:22:35.587455   12353 main.go:141] libmachine: Running pre-create checks...
	I0421 18:22:35.587482   12353 main.go:141] libmachine: (addons-337450) Calling .PreCreateCheck
	I0421 18:22:35.588015   12353 main.go:141] libmachine: (addons-337450) Calling .GetConfigRaw
	I0421 18:22:35.588420   12353 main.go:141] libmachine: Creating machine...
	I0421 18:22:35.588434   12353 main.go:141] libmachine: (addons-337450) Calling .Create
	I0421 18:22:35.588590   12353 main.go:141] libmachine: (addons-337450) Creating KVM machine...
	I0421 18:22:35.589817   12353 main.go:141] libmachine: (addons-337450) DBG | found existing default KVM network
	I0421 18:22:35.590577   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.590431   12375 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0421 18:22:35.590598   12353 main.go:141] libmachine: (addons-337450) DBG | created network xml: 
	I0421 18:22:35.590611   12353 main.go:141] libmachine: (addons-337450) DBG | <network>
	I0421 18:22:35.590616   12353 main.go:141] libmachine: (addons-337450) DBG |   <name>mk-addons-337450</name>
	I0421 18:22:35.590624   12353 main.go:141] libmachine: (addons-337450) DBG |   <dns enable='no'/>
	I0421 18:22:35.590631   12353 main.go:141] libmachine: (addons-337450) DBG |   
	I0421 18:22:35.590641   12353 main.go:141] libmachine: (addons-337450) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0421 18:22:35.590652   12353 main.go:141] libmachine: (addons-337450) DBG |     <dhcp>
	I0421 18:22:35.590661   12353 main.go:141] libmachine: (addons-337450) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0421 18:22:35.590675   12353 main.go:141] libmachine: (addons-337450) DBG |     </dhcp>
	I0421 18:22:35.590705   12353 main.go:141] libmachine: (addons-337450) DBG |   </ip>
	I0421 18:22:35.590751   12353 main.go:141] libmachine: (addons-337450) DBG |   
	I0421 18:22:35.590768   12353 main.go:141] libmachine: (addons-337450) DBG | </network>
	I0421 18:22:35.590779   12353 main.go:141] libmachine: (addons-337450) DBG | 
	I0421 18:22:35.595996   12353 main.go:141] libmachine: (addons-337450) DBG | trying to create private KVM network mk-addons-337450 192.168.39.0/24...
	I0421 18:22:35.660360   12353 main.go:141] libmachine: (addons-337450) DBG | private KVM network mk-addons-337450 192.168.39.0/24 created
	I0421 18:22:35.660404   12353 main.go:141] libmachine: (addons-337450) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450 ...
	I0421 18:22:35.660432   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.660322   12375 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:35.660452   12353 main.go:141] libmachine: (addons-337450) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:22:35.660473   12353 main.go:141] libmachine: (addons-337450) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:22:35.908314   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.908177   12375 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa...
	I0421 18:22:35.969413   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.969294   12375 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/addons-337450.rawdisk...
	I0421 18:22:35.969451   12353 main.go:141] libmachine: (addons-337450) DBG | Writing magic tar header
	I0421 18:22:35.969466   12353 main.go:141] libmachine: (addons-337450) DBG | Writing SSH key tar header
	I0421 18:22:35.969475   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.969417   12375 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450 ...
	I0421 18:22:35.969532   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450
	I0421 18:22:35.969558   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450 (perms=drwx------)
	I0421 18:22:35.969573   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:22:35.969585   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:22:35.969600   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:22:35.969609   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:22:35.969618   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:22:35.969625   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:22:35.969639   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:35.969652   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:22:35.969661   12353 main.go:141] libmachine: (addons-337450) Creating domain...
	I0421 18:22:35.969670   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:22:35.969683   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:22:35.969694   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home
	I0421 18:22:35.969702   12353 main.go:141] libmachine: (addons-337450) DBG | Skipping /home - not owner
	I0421 18:22:35.970755   12353 main.go:141] libmachine: (addons-337450) define libvirt domain using xml: 
	I0421 18:22:35.970780   12353 main.go:141] libmachine: (addons-337450) <domain type='kvm'>
	I0421 18:22:35.970808   12353 main.go:141] libmachine: (addons-337450)   <name>addons-337450</name>
	I0421 18:22:35.970819   12353 main.go:141] libmachine: (addons-337450)   <memory unit='MiB'>4000</memory>
	I0421 18:22:35.970828   12353 main.go:141] libmachine: (addons-337450)   <vcpu>2</vcpu>
	I0421 18:22:35.970843   12353 main.go:141] libmachine: (addons-337450)   <features>
	I0421 18:22:35.970852   12353 main.go:141] libmachine: (addons-337450)     <acpi/>
	I0421 18:22:35.970859   12353 main.go:141] libmachine: (addons-337450)     <apic/>
	I0421 18:22:35.970867   12353 main.go:141] libmachine: (addons-337450)     <pae/>
	I0421 18:22:35.970880   12353 main.go:141] libmachine: (addons-337450)     
	I0421 18:22:35.970892   12353 main.go:141] libmachine: (addons-337450)   </features>
	I0421 18:22:35.970902   12353 main.go:141] libmachine: (addons-337450)   <cpu mode='host-passthrough'>
	I0421 18:22:35.970913   12353 main.go:141] libmachine: (addons-337450)   
	I0421 18:22:35.970924   12353 main.go:141] libmachine: (addons-337450)   </cpu>
	I0421 18:22:35.970937   12353 main.go:141] libmachine: (addons-337450)   <os>
	I0421 18:22:35.970946   12353 main.go:141] libmachine: (addons-337450)     <type>hvm</type>
	I0421 18:22:35.970976   12353 main.go:141] libmachine: (addons-337450)     <boot dev='cdrom'/>
	I0421 18:22:35.970993   12353 main.go:141] libmachine: (addons-337450)     <boot dev='hd'/>
	I0421 18:22:35.971003   12353 main.go:141] libmachine: (addons-337450)     <bootmenu enable='no'/>
	I0421 18:22:35.971018   12353 main.go:141] libmachine: (addons-337450)   </os>
	I0421 18:22:35.971033   12353 main.go:141] libmachine: (addons-337450)   <devices>
	I0421 18:22:35.971045   12353 main.go:141] libmachine: (addons-337450)     <disk type='file' device='cdrom'>
	I0421 18:22:35.971061   12353 main.go:141] libmachine: (addons-337450)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/boot2docker.iso'/>
	I0421 18:22:35.971072   12353 main.go:141] libmachine: (addons-337450)       <target dev='hdc' bus='scsi'/>
	I0421 18:22:35.971096   12353 main.go:141] libmachine: (addons-337450)       <readonly/>
	I0421 18:22:35.971114   12353 main.go:141] libmachine: (addons-337450)     </disk>
	I0421 18:22:35.971123   12353 main.go:141] libmachine: (addons-337450)     <disk type='file' device='disk'>
	I0421 18:22:35.971134   12353 main.go:141] libmachine: (addons-337450)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:22:35.971155   12353 main.go:141] libmachine: (addons-337450)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/addons-337450.rawdisk'/>
	I0421 18:22:35.971163   12353 main.go:141] libmachine: (addons-337450)       <target dev='hda' bus='virtio'/>
	I0421 18:22:35.971169   12353 main.go:141] libmachine: (addons-337450)     </disk>
	I0421 18:22:35.971176   12353 main.go:141] libmachine: (addons-337450)     <interface type='network'>
	I0421 18:22:35.971182   12353 main.go:141] libmachine: (addons-337450)       <source network='mk-addons-337450'/>
	I0421 18:22:35.971190   12353 main.go:141] libmachine: (addons-337450)       <model type='virtio'/>
	I0421 18:22:35.971195   12353 main.go:141] libmachine: (addons-337450)     </interface>
	I0421 18:22:35.971203   12353 main.go:141] libmachine: (addons-337450)     <interface type='network'>
	I0421 18:22:35.971209   12353 main.go:141] libmachine: (addons-337450)       <source network='default'/>
	I0421 18:22:35.971213   12353 main.go:141] libmachine: (addons-337450)       <model type='virtio'/>
	I0421 18:22:35.971226   12353 main.go:141] libmachine: (addons-337450)     </interface>
	I0421 18:22:35.971239   12353 main.go:141] libmachine: (addons-337450)     <serial type='pty'>
	I0421 18:22:35.971252   12353 main.go:141] libmachine: (addons-337450)       <target port='0'/>
	I0421 18:22:35.971263   12353 main.go:141] libmachine: (addons-337450)     </serial>
	I0421 18:22:35.971276   12353 main.go:141] libmachine: (addons-337450)     <console type='pty'>
	I0421 18:22:35.971295   12353 main.go:141] libmachine: (addons-337450)       <target type='serial' port='0'/>
	I0421 18:22:35.971306   12353 main.go:141] libmachine: (addons-337450)     </console>
	I0421 18:22:35.971314   12353 main.go:141] libmachine: (addons-337450)     <rng model='virtio'>
	I0421 18:22:35.971325   12353 main.go:141] libmachine: (addons-337450)       <backend model='random'>/dev/random</backend>
	I0421 18:22:35.971336   12353 main.go:141] libmachine: (addons-337450)     </rng>
	I0421 18:22:35.971349   12353 main.go:141] libmachine: (addons-337450)     
	I0421 18:22:35.971367   12353 main.go:141] libmachine: (addons-337450)     
	I0421 18:22:35.971450   12353 main.go:141] libmachine: (addons-337450)   </devices>
	I0421 18:22:35.971468   12353 main.go:141] libmachine: (addons-337450) </domain>
	I0421 18:22:35.971482   12353 main.go:141] libmachine: (addons-337450) 
	I0421 18:22:35.977957   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:4c:66:bb in network default
	I0421 18:22:35.978470   12353 main.go:141] libmachine: (addons-337450) Ensuring networks are active...
	I0421 18:22:35.978495   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:35.979065   12353 main.go:141] libmachine: (addons-337450) Ensuring network default is active
	I0421 18:22:35.979340   12353 main.go:141] libmachine: (addons-337450) Ensuring network mk-addons-337450 is active
	I0421 18:22:35.979887   12353 main.go:141] libmachine: (addons-337450) Getting domain xml...
	I0421 18:22:35.980491   12353 main.go:141] libmachine: (addons-337450) Creating domain...
	I0421 18:22:37.332521   12353 main.go:141] libmachine: (addons-337450) Waiting to get IP...
	I0421 18:22:37.333211   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:37.333642   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:37.333676   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:37.333622   12375 retry.go:31] will retry after 290.403397ms: waiting for machine to come up
	I0421 18:22:37.625299   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:37.625693   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:37.625744   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:37.625686   12375 retry.go:31] will retry after 302.232672ms: waiting for machine to come up
	I0421 18:22:37.929187   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:37.929647   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:37.929672   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:37.929592   12375 retry.go:31] will retry after 463.355197ms: waiting for machine to come up
	I0421 18:22:38.394034   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:38.394435   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:38.394460   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:38.394403   12375 retry.go:31] will retry after 526.97784ms: waiting for machine to come up
	I0421 18:22:38.922949   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:38.923405   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:38.923458   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:38.923367   12375 retry.go:31] will retry after 603.499708ms: waiting for machine to come up
	I0421 18:22:39.528321   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:39.528749   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:39.528781   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:39.528690   12375 retry.go:31] will retry after 632.935544ms: waiting for machine to come up
	I0421 18:22:40.163453   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:40.163890   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:40.163918   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:40.163837   12375 retry.go:31] will retry after 901.774974ms: waiting for machine to come up
	I0421 18:22:41.067580   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:41.067967   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:41.067997   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:41.067909   12375 retry.go:31] will retry after 1.413543626s: waiting for machine to come up
	I0421 18:22:42.483305   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:42.483709   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:42.483731   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:42.483675   12375 retry.go:31] will retry after 1.750079619s: waiting for machine to come up
	I0421 18:22:44.236604   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:44.237041   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:44.237064   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:44.236973   12375 retry.go:31] will retry after 1.402403396s: waiting for machine to come up
	I0421 18:22:45.641454   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:45.641830   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:45.641862   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:45.641765   12375 retry.go:31] will retry after 2.357370138s: waiting for machine to come up
	I0421 18:22:48.002442   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:48.002965   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:48.002986   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:48.002922   12375 retry.go:31] will retry after 3.525566649s: waiting for machine to come up
	I0421 18:22:51.530143   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:51.530573   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:51.530629   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:51.530587   12375 retry.go:31] will retry after 4.023576525s: waiting for machine to come up
	I0421 18:22:55.555680   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:55.556097   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:55.556141   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:55.556090   12375 retry.go:31] will retry after 5.658995234s: waiting for machine to come up
	I0421 18:23:01.216683   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.217149   12353 main.go:141] libmachine: (addons-337450) Found IP for machine: 192.168.39.51
	I0421 18:23:01.217170   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has current primary IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.217176   12353 main.go:141] libmachine: (addons-337450) Reserving static IP address...
	I0421 18:23:01.217556   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find host DHCP lease matching {name: "addons-337450", mac: "52:54:00:b4:47:66", ip: "192.168.39.51"} in network mk-addons-337450
	I0421 18:23:01.288523   12353 main.go:141] libmachine: (addons-337450) DBG | Getting to WaitForSSH function...
	I0421 18:23:01.288554   12353 main.go:141] libmachine: (addons-337450) Reserved static IP address: 192.168.39.51
	I0421 18:23:01.288567   12353 main.go:141] libmachine: (addons-337450) Waiting for SSH to be available...
	I0421 18:23:01.291326   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.291644   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.291677   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.291970   12353 main.go:141] libmachine: (addons-337450) DBG | Using SSH client type: external
	I0421 18:23:01.292001   12353 main.go:141] libmachine: (addons-337450) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa (-rw-------)
	I0421 18:23:01.292035   12353 main.go:141] libmachine: (addons-337450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:23:01.292051   12353 main.go:141] libmachine: (addons-337450) DBG | About to run SSH command:
	I0421 18:23:01.292064   12353 main.go:141] libmachine: (addons-337450) DBG | exit 0
	I0421 18:23:01.422675   12353 main.go:141] libmachine: (addons-337450) DBG | SSH cmd err, output: <nil>: 
	I0421 18:23:01.422979   12353 main.go:141] libmachine: (addons-337450) KVM machine creation complete!
	I0421 18:23:01.423357   12353 main.go:141] libmachine: (addons-337450) Calling .GetConfigRaw
	I0421 18:23:01.423911   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:01.424103   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:01.424277   12353 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:23:01.424291   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:01.425451   12353 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:23:01.425467   12353 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:23:01.425476   12353 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:23:01.425486   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.427889   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.428246   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.428278   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.428413   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.428595   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.428767   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.428920   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.429064   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.429310   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.429328   12353 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:23:01.529757   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:23:01.529783   12353 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:23:01.529796   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.532350   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.532655   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.532685   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.532799   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.532961   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.533104   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.533220   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.533396   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.533551   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.533562   12353 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:23:01.635682   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:23:01.635755   12353 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:23:01.635764   12353 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:23:01.635780   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:23:01.636039   12353 buildroot.go:166] provisioning hostname "addons-337450"
	I0421 18:23:01.636063   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:23:01.636237   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.638757   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.639142   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.639166   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.639263   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.639428   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.639577   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.639685   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.639832   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.640036   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.640050   12353 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-337450 && echo "addons-337450" | sudo tee /etc/hostname
	I0421 18:23:01.756328   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-337450
	
	I0421 18:23:01.756361   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.759061   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.759386   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.759418   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.759556   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.759739   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.759896   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.760042   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.760168   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.760325   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.760340   12353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-337450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-337450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-337450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:23:01.873319   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:23:01.873351   12353 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:23:01.873393   12353 buildroot.go:174] setting up certificates
	I0421 18:23:01.873403   12353 provision.go:84] configureAuth start
	I0421 18:23:01.873415   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:23:01.873702   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:01.876424   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.876764   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.876784   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.876953   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.878885   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.879186   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.879219   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.879329   12353 provision.go:143] copyHostCerts
	I0421 18:23:01.879407   12353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:23:01.879549   12353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:23:01.879641   12353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:23:01.879743   12353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.addons-337450 san=[127.0.0.1 192.168.39.51 addons-337450 localhost minikube]
	I0421 18:23:02.000631   12353 provision.go:177] copyRemoteCerts
	I0421 18:23:02.000699   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:23:02.000734   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.003339   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.003610   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.003638   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.003778   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.003981   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.004160   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.004298   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.085503   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:23:02.113780   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:23:02.141526   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:23:02.167514   12353 provision.go:87] duration metric: took 294.100021ms to configureAuth
	I0421 18:23:02.167539   12353 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:23:02.167747   12353 config.go:182] Loaded profile config "addons-337450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:23:02.167835   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.170334   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.170821   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.170857   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.171029   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.171236   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.171433   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.171649   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.171852   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:02.172056   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:02.172072   12353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:23:02.471217   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:23:02.471239   12353 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:23:02.471246   12353 main.go:141] libmachine: (addons-337450) Calling .GetURL
	I0421 18:23:02.472521   12353 main.go:141] libmachine: (addons-337450) DBG | Using libvirt version 6000000
	I0421 18:23:02.475007   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.475422   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.475451   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.475637   12353 main.go:141] libmachine: Docker is up and running!
	I0421 18:23:02.475652   12353 main.go:141] libmachine: Reticulating splines...
	I0421 18:23:02.475658   12353 client.go:171] duration metric: took 27.160821013s to LocalClient.Create
	I0421 18:23:02.475679   12353 start.go:167] duration metric: took 27.160882242s to libmachine.API.Create "addons-337450"
	I0421 18:23:02.475697   12353 start.go:293] postStartSetup for "addons-337450" (driver="kvm2")
	I0421 18:23:02.475710   12353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:23:02.475726   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.475998   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:23:02.476020   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.478437   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.478934   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.478960   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.479100   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.479296   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.479469   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.479707   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.562410   12353 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:23:02.567270   12353 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:23:02.567294   12353 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:23:02.567373   12353 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:23:02.567405   12353 start.go:296] duration metric: took 91.700109ms for postStartSetup
	I0421 18:23:02.567437   12353 main.go:141] libmachine: (addons-337450) Calling .GetConfigRaw
	I0421 18:23:02.567976   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:02.570924   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.571584   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.571609   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.571880   12353 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/config.json ...
	I0421 18:23:02.572068   12353 start.go:128] duration metric: took 27.275295251s to createHost
	I0421 18:23:02.572093   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.574438   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.574829   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.574859   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.574995   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.575184   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.575332   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.575472   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.575643   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:02.575820   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:02.575830   12353 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:23:02.675669   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713723782.649027493
	
	I0421 18:23:02.675692   12353 fix.go:216] guest clock: 1713723782.649027493
	I0421 18:23:02.675700   12353 fix.go:229] Guest: 2024-04-21 18:23:02.649027493 +0000 UTC Remote: 2024-04-21 18:23:02.572081139 +0000 UTC m=+27.390275697 (delta=76.946354ms)
	I0421 18:23:02.675735   12353 fix.go:200] guest clock delta is within tolerance: 76.946354ms
	I0421 18:23:02.675740   12353 start.go:83] releasing machines lock for "addons-337450", held for 27.379060586s
	I0421 18:23:02.675758   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.675995   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:02.678400   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.678723   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.678747   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.678940   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.679396   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.679558   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.679638   12353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:23:02.679682   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.679745   12353 ssh_runner.go:195] Run: cat /version.json
	I0421 18:23:02.679769   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.682106   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682328   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682451   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.682477   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682574   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.682742   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.682767   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.682774   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682886   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.682940   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.683027   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.683085   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.683147   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.683249   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.817958   12353 ssh_runner.go:195] Run: systemctl --version
	I0421 18:23:02.824467   12353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:23:02.989445   12353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:23:02.997276   12353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:23:02.997349   12353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:23:03.014851   12353 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:23:03.014878   12353 start.go:494] detecting cgroup driver to use...
	I0421 18:23:03.014947   12353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:23:03.031066   12353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:23:03.045566   12353 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:23:03.045618   12353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:23:03.059952   12353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:23:03.074163   12353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:23:03.191599   12353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:23:03.336464   12353 docker.go:233] disabling docker service ...
	I0421 18:23:03.336548   12353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:23:03.353356   12353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:23:03.367747   12353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:23:03.512164   12353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:23:03.650757   12353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:23:03.666983   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:23:03.688494   12353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:23:03.688566   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.701349   12353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:23:03.701428   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.715725   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.728486   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.747516   12353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:23:03.759239   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.770359   12353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.789434   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.800720   12353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:23:03.811266   12353 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:23:03.811332   12353 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:23:03.827668   12353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:23:03.838782   12353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:23:03.963880   12353 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:23:04.114123   12353 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:23:04.114211   12353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:23:04.119626   12353 start.go:562] Will wait 60s for crictl version
	I0421 18:23:04.119682   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:23:04.123803   12353 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:23:04.165462   12353 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:23:04.165573   12353 ssh_runner.go:195] Run: crio --version
	I0421 18:23:04.196870   12353 ssh_runner.go:195] Run: crio --version
	I0421 18:23:04.229837   12353 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:23:04.231352   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:04.234111   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:04.234416   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:04.234451   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:04.234620   12353 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:23:04.239188   12353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:23:04.252618   12353 kubeadm.go:877] updating cluster {Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:23:04.252802   12353 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:23:04.252862   12353 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:23:04.292502   12353 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 18:23:04.292573   12353 ssh_runner.go:195] Run: which lz4
	I0421 18:23:04.297062   12353 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 18:23:04.301681   12353 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 18:23:04.301717   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 18:23:05.912822   12353 crio.go:462] duration metric: took 1.615791433s to copy over tarball
	I0421 18:23:05.912906   12353 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 18:23:08.547171   12353 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634239198s)
	I0421 18:23:08.547198   12353 crio.go:469] duration metric: took 2.634350292s to extract the tarball
	I0421 18:23:08.547208   12353 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 18:23:08.587022   12353 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:23:08.637424   12353 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:23:08.637447   12353 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:23:08.637457   12353 kubeadm.go:928] updating node { 192.168.39.51 8443 v1.30.0 crio true true} ...
	I0421 18:23:08.637573   12353 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-337450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:23:08.637662   12353 ssh_runner.go:195] Run: crio config
	I0421 18:23:08.684573   12353 cni.go:84] Creating CNI manager for ""
	I0421 18:23:08.684596   12353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:23:08.684608   12353 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:23:08.684627   12353 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-337450 NodeName:addons-337450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:23:08.684750   12353 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-337450"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 18:23:08.684808   12353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:23:08.696489   12353 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:23:08.696564   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 18:23:08.707350   12353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0421 18:23:08.726272   12353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:23:08.745532   12353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0421 18:23:08.764282   12353 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0421 18:23:08.768717   12353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:23:08.782658   12353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:23:08.910083   12353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:23:08.930315   12353 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450 for IP: 192.168.39.51
	I0421 18:23:08.930342   12353 certs.go:194] generating shared ca certs ...
	I0421 18:23:08.930363   12353 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:08.930522   12353 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:23:09.066629   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt ...
	I0421 18:23:09.066659   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt: {Name:mk5a664d977aab951980c9523c0f69eb4aa7a00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.066826   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key ...
	I0421 18:23:09.066841   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key: {Name:mk3fcec5c20999d335d6a5dac5fc16bf27da2984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.066912   12353 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:23:09.179092   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt ...
	I0421 18:23:09.179120   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt: {Name:mkf45db38f5b63b2dcc8473373bea520935f8d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.179286   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key ...
	I0421 18:23:09.179298   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key: {Name:mk7e58d4cac388d3c1580b19b2d8fcf71f4dba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.179370   12353 certs.go:256] generating profile certs ...
	I0421 18:23:09.179422   12353 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.key
	I0421 18:23:09.179436   12353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt with IP's: []
	I0421 18:23:09.422548   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt ...
	I0421 18:23:09.422575   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: {Name:mkb95799b2bcb246ea2be7e267ed3faffc78c639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.422731   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.key ...
	I0421 18:23:09.422742   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.key: {Name:mk74672d5b54e5c9788d1c06d12e69cbba120437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.422809   12353 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d
	I0421 18:23:09.422826   12353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.51]
	I0421 18:23:09.549381   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d ...
	I0421 18:23:09.549413   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d: {Name:mk87574e1d2cfde51605eb05f68cb97f5958443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.549577   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d ...
	I0421 18:23:09.549591   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d: {Name:mk456d7f4be5166d19cb0fa70f5d92c8d40a09ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.549663   12353 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt
	I0421 18:23:09.549756   12353 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key
	I0421 18:23:09.549806   12353 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key
	I0421 18:23:09.549823   12353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt with IP's: []
	I0421 18:23:09.642270   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt ...
	I0421 18:23:09.642308   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt: {Name:mkb6d378f694b6ad483fa038d205e6585b0f80ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.642463   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key ...
	I0421 18:23:09.642474   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key: {Name:mke7cbd9b1e9905297223b503bbc4c5986fcca05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.642619   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:23:09.642653   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:23:09.642679   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:23:09.642722   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:23:09.643336   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:23:09.677535   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:23:09.709139   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:23:09.742754   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:23:09.772085   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 18:23:09.802198   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:23:09.832739   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:23:09.861749   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 18:23:09.891829   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:23:09.923330   12353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:23:09.945934   12353 ssh_runner.go:195] Run: openssl version
	I0421 18:23:09.953391   12353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:23:09.967785   12353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:23:09.973575   12353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:23:09.973629   12353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:23:09.980758   12353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:23:09.995455   12353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:23:10.000508   12353 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:23:10.000558   12353 kubeadm.go:391] StartCluster: {Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:23:10.000625   12353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 18:23:10.000667   12353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 18:23:10.042788   12353 cri.go:89] found id: ""
	I0421 18:23:10.042862   12353 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 18:23:10.055629   12353 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 18:23:10.068449   12353 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 18:23:10.080493   12353 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 18:23:10.080523   12353 kubeadm.go:156] found existing configuration files:
	
	I0421 18:23:10.080610   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 18:23:10.092266   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 18:23:10.092344   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 18:23:10.105313   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 18:23:10.117055   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 18:23:10.117107   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 18:23:10.132622   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 18:23:10.146228   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 18:23:10.146283   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 18:23:10.157504   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 18:23:10.167867   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 18:23:10.167932   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 18:23:10.178596   12353 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 18:23:10.372860   12353 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 18:23:20.562049   12353 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 18:23:20.562181   12353 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 18:23:20.562288   12353 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 18:23:20.562416   12353 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 18:23:20.562543   12353 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 18:23:20.562740   12353 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 18:23:20.564408   12353 out.go:204]   - Generating certificates and keys ...
	I0421 18:23:20.564474   12353 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 18:23:20.564523   12353 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 18:23:20.564586   12353 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 18:23:20.564653   12353 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 18:23:20.564717   12353 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 18:23:20.564775   12353 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 18:23:20.564871   12353 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 18:23:20.565030   12353 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-337450 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0421 18:23:20.565112   12353 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 18:23:20.565265   12353 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-337450 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0421 18:23:20.565322   12353 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 18:23:20.565373   12353 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 18:23:20.565424   12353 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 18:23:20.565467   12353 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 18:23:20.565508   12353 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 18:23:20.565552   12353 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 18:23:20.565594   12353 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 18:23:20.565646   12353 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 18:23:20.565688   12353 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 18:23:20.565758   12353 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 18:23:20.565813   12353 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 18:23:20.567163   12353 out.go:204]   - Booting up control plane ...
	I0421 18:23:20.567233   12353 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 18:23:20.567310   12353 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 18:23:20.567377   12353 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 18:23:20.567466   12353 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 18:23:20.567559   12353 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 18:23:20.567605   12353 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 18:23:20.567724   12353 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 18:23:20.567792   12353 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 18:23:20.567851   12353 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.043039ms
	I0421 18:23:20.567913   12353 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 18:23:20.567969   12353 kubeadm.go:309] [api-check] The API server is healthy after 5.502975082s
	I0421 18:23:20.568080   12353 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 18:23:20.568226   12353 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 18:23:20.568319   12353 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 18:23:20.568570   12353 kubeadm.go:309] [mark-control-plane] Marking the node addons-337450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 18:23:20.568624   12353 kubeadm.go:309] [bootstrap-token] Using token: intyc2.kpq50nnam4k5x17k
	I0421 18:23:20.570983   12353 out.go:204]   - Configuring RBAC rules ...
	I0421 18:23:20.571065   12353 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 18:23:20.571164   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 18:23:20.571312   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 18:23:20.571475   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 18:23:20.571641   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 18:23:20.571714   12353 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 18:23:20.571814   12353 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 18:23:20.571854   12353 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 18:23:20.571901   12353 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 18:23:20.571915   12353 kubeadm.go:309] 
	I0421 18:23:20.571960   12353 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 18:23:20.571965   12353 kubeadm.go:309] 
	I0421 18:23:20.572022   12353 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 18:23:20.572028   12353 kubeadm.go:309] 
	I0421 18:23:20.572054   12353 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 18:23:20.572109   12353 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 18:23:20.572149   12353 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 18:23:20.572155   12353 kubeadm.go:309] 
	I0421 18:23:20.572196   12353 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 18:23:20.572202   12353 kubeadm.go:309] 
	I0421 18:23:20.572240   12353 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 18:23:20.572246   12353 kubeadm.go:309] 
	I0421 18:23:20.572287   12353 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 18:23:20.572355   12353 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 18:23:20.572412   12353 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 18:23:20.572421   12353 kubeadm.go:309] 
	I0421 18:23:20.572487   12353 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 18:23:20.572548   12353 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 18:23:20.572554   12353 kubeadm.go:309] 
	I0421 18:23:20.572618   12353 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token intyc2.kpq50nnam4k5x17k \
	I0421 18:23:20.572704   12353 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 18:23:20.572736   12353 kubeadm.go:309] 	--control-plane 
	I0421 18:23:20.572746   12353 kubeadm.go:309] 
	I0421 18:23:20.572811   12353 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 18:23:20.572820   12353 kubeadm.go:309] 
	I0421 18:23:20.572884   12353 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token intyc2.kpq50nnam4k5x17k \
	I0421 18:23:20.572973   12353 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 18:23:20.572982   12353 cni.go:84] Creating CNI manager for ""
	I0421 18:23:20.572988   12353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:23:20.574483   12353 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 18:23:20.575616   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 18:23:20.588196   12353 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 18:23:20.615993   12353 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 18:23:20.616113   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:20.616205   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-337450 minikube.k8s.io/updated_at=2024_04_21T18_23_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=addons-337450 minikube.k8s.io/primary=true
	I0421 18:23:20.642160   12353 ops.go:34] apiserver oom_adj: -16
	I0421 18:23:20.779027   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:21.279229   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:21.779104   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:22.280005   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:22.779196   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:23.279473   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:23.779537   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:24.279267   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:24.780099   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:25.279158   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:25.779797   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:26.279210   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:26.779782   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:27.279932   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:27.779775   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:28.279661   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:28.779170   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:29.279304   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:29.779891   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:30.279572   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:30.779479   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:31.279995   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:31.779482   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:32.279914   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:32.779906   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:33.279456   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:33.424816   12353 kubeadm.go:1107] duration metric: took 12.808775729s to wait for elevateKubeSystemPrivileges
	W0421 18:23:33.424877   12353 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 18:23:33.424888   12353 kubeadm.go:393] duration metric: took 23.424333542s to StartCluster
	I0421 18:23:33.424913   12353 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:33.425074   12353 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:23:33.425591   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:33.425774   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 18:23:33.425796   12353 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:23:33.427605   12353 out.go:177] * Verifying Kubernetes components...
	I0421 18:23:33.425860   12353 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0421 18:23:33.426036   12353 config.go:182] Loaded profile config "addons-337450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:23:33.429046   12353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:23:33.429082   12353 addons.go:69] Setting yakd=true in profile "addons-337450"
	I0421 18:23:33.429094   12353 addons.go:69] Setting cloud-spanner=true in profile "addons-337450"
	I0421 18:23:33.429112   12353 addons.go:69] Setting helm-tiller=true in profile "addons-337450"
	I0421 18:23:33.429121   12353 addons.go:69] Setting inspektor-gadget=true in profile "addons-337450"
	I0421 18:23:33.429134   12353 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-337450"
	I0421 18:23:33.429143   12353 addons.go:234] Setting addon cloud-spanner=true in "addons-337450"
	I0421 18:23:33.429148   12353 addons.go:234] Setting addon helm-tiller=true in "addons-337450"
	I0421 18:23:33.429153   12353 addons.go:234] Setting addon inspektor-gadget=true in "addons-337450"
	I0421 18:23:33.429162   12353 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-337450"
	I0421 18:23:33.429157   12353 addons.go:69] Setting registry=true in profile "addons-337450"
	I0421 18:23:33.429183   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429125   12353 addons.go:69] Setting storage-provisioner=true in profile "addons-337450"
	I0421 18:23:33.429194   12353 addons.go:234] Setting addon registry=true in "addons-337450"
	I0421 18:23:33.429185   12353 addons.go:69] Setting volumesnapshots=true in profile "addons-337450"
	I0421 18:23:33.429204   12353 addons.go:234] Setting addon storage-provisioner=true in "addons-337450"
	I0421 18:23:33.429213   12353 addons.go:69] Setting gcp-auth=true in profile "addons-337450"
	I0421 18:23:33.429194   12353 addons.go:69] Setting default-storageclass=true in profile "addons-337450"
	I0421 18:23:33.429233   12353 addons.go:69] Setting metrics-server=true in profile "addons-337450"
	I0421 18:23:33.429244   12353 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-337450"
	I0421 18:23:33.429250   12353 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-337450"
	I0421 18:23:33.429262   12353 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-337450"
	I0421 18:23:33.429295   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429115   12353 addons.go:234] Setting addon yakd=true in "addons-337450"
	I0421 18:23:33.429384   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429084   12353 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-337450"
	I0421 18:23:33.429459   12353 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-337450"
	I0421 18:23:33.429486   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429595   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429605   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429622   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429629   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429655   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429665   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429183   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429253   12353 addons.go:234] Setting addon metrics-server=true in "addons-337450"
	I0421 18:23:33.429245   12353 mustload.go:65] Loading cluster: addons-337450
	I0421 18:23:33.429794   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429800   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429820   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429898   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429934   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429951   12353 config.go:182] Loaded profile config "addons-337450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:23:33.430075   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429183   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430098   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429635   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429216   12353 addons.go:234] Setting addon volumesnapshots=true in "addons-337450"
	I0421 18:23:33.430237   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430295   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.430321   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.430459   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.430493   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429224   12353 addons.go:69] Setting ingress=true in profile "addons-337450"
	I0421 18:23:33.430565   12353 addons.go:234] Setting addon ingress=true in "addons-337450"
	I0421 18:23:33.430603   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430923   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.430941   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.431006   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.431159   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.431191   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429232   12353 addons.go:69] Setting ingress-dns=true in profile "addons-337450"
	I0421 18:23:33.431280   12353 addons.go:234] Setting addon ingress-dns=true in "addons-337450"
	I0421 18:23:33.431311   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429224   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430193   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.431612   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429230   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.450140   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46223
	I0421 18:23:33.450223   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0421 18:23:33.451165   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.451219   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.451715   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.451728   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.451737   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.451743   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.452107   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.452251   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.452301   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.452796   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.452829   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.454710   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0421 18:23:33.455127   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.455643   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.455672   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.456006   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.456189   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.456658   12353 addons.go:234] Setting addon default-storageclass=true in "addons-337450"
	I0421 18:23:33.456702   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.457102   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.457137   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.458686   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.458733   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.458968   12353 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-337450"
	I0421 18:23:33.459014   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.459344   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.459389   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.459356   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.459471   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.459888   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.459923   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.462921   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0421 18:23:33.463129   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I0421 18:23:33.463383   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.463827   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.463845   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.464046   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.464317   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.464830   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.464855   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.465449   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.465466   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.465863   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.466378   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.466418   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.480850   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0421 18:23:33.481544   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.482246   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.482268   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.482619   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.482775   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38863
	I0421 18:23:33.482936   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0421 18:23:33.483235   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.483312   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.483324   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.483360   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.483592   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0421 18:23:33.483727   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.483746   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.483745   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.483793   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.484072   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.484445   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.484652   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.484734   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.484778   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.485141   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.485186   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.486848   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.486866   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.487218   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.493151   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.493182   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.496182   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0421 18:23:33.496347   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0421 18:23:33.496872   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.497381   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.497405   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.497738   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.497911   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.499701   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.500293   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.500892   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.500914   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.502030   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.502359   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.504184   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.504582   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.504602   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.507251   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0421 18:23:33.509482   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0421 18:23:33.508065   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.512185   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0421 18:23:33.511376   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.513533   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.513591   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0421 18:23:33.514039   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.515232   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0421 18:23:33.516857   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0421 18:23:33.515911   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.519977   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0421 18:23:33.521197   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0421 18:23:33.520218   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.520955   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0421 18:23:33.523749   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0421 18:23:33.524992   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0421 18:23:33.525008   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0421 18:23:33.525029   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.526268   12353 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0421 18:23:33.523157   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.523566   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0421 18:23:33.525907   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0421 18:23:33.527317   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0421 18:23:33.527895   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0421 18:23:33.528419   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0421 18:23:33.528432   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0421 18:23:33.528451   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.528246   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.530694   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.530708   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
	I0421 18:23:33.530717   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.530698   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.530827   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.530855   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.531113   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.531142   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.531152   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.531506   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.531553   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.531640   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.531647   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.531662   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.531674   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.532115   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.532229   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.532243   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.532398   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.532412   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.532476   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.532688   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.532758   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.533290   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.533299   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.533345   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.533351   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.533439   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.533452   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.533750   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.533769   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.533824   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.533848   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.534007   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.534224   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.534242   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.534268   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.534281   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.534297   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.534833   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.535025   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.535156   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.535260   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.535357   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.535787   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.535845   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.537860   12353 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0421 18:23:33.540222   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.537482   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.537593   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.539315   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0421 18:23:33.539675   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I0421 18:23:33.540504   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0421 18:23:33.541534   12353 out.go:177]   - Using image docker.io/busybox:stable
	I0421 18:23:33.542283   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.542285   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.542430   12353 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 18:23:33.542682   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.543338   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45107
	I0421 18:23:33.543530   12353 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0421 18:23:33.543636   12353 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0421 18:23:33.544132   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.544943   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.544996   12353 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0421 18:23:33.546446   12353 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0421 18:23:33.546461   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0421 18:23:33.546477   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.547994   12353 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0421 18:23:33.548011   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0421 18:23:33.548030   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.545131   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0421 18:23:33.548089   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.544230   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.548128   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.545142   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 18:23:33.548166   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.545969   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.548190   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.546111   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.546326   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.546689   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0421 18:23:33.549340   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.549358   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.549409   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.549500   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.550100   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.550136   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.550690   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.550697   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.550721   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.551178   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.551201   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.551271   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.551477   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0421 18:23:33.551500   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.551560   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.551988   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.552026   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.552240   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.552520   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.552624   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.552649   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.552750   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.553082   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.553118   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.553285   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.553446   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.553572   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.554342   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.554422   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.554436   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.554541   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.554569   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.555381   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.555411   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0421 18:23:33.555469   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.555498   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.555827   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.555847   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.555851   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.556115   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.556155   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.556222   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.556247   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.556415   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.556669   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.556732   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.556800   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.557016   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.557342   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.557360   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.557641   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0421 18:23:33.557782   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0421 18:23:33.558159   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.558577   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.558637   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.558649   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.558664   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.559032   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.559049   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.559114   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.559133   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.559223   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.559390   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.559433   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.559541   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.559566   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.559593   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.560162   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.560629   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.560828   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.561001   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.561479   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.561740   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.563846   12353 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0421 18:23:33.565085   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0421 18:23:33.562847   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.566305   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0421 18:23:33.568530   12353 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0421 18:23:33.567448   12353 out.go:177]   - Using image docker.io/registry:2.8.3
	I0421 18:23:33.567465   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0421 18:23:33.569782   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.569927   12353 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0421 18:23:33.569936   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0421 18:23:33.569962   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.572118   12353 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0421 18:23:33.572135   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0421 18:23:33.572156   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.574934   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.575680   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.576500   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.576526   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.576758   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.576783   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.576960   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.577135   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.577181   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.577349   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.577399   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.577643   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.577924   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.578037   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0421 18:23:33.578293   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.578520   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.578548   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.578840   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.578871   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.578940   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.579112   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.579235   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.579347   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.579684   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.579693   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.580019   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.580186   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.581511   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.583621   12353 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0421 18:23:33.585030   12353 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0421 18:23:33.585045   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0421 18:23:33.585062   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.587674   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.587996   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.588009   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.588129   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.588297   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.588460   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.588605   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.590892   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0421 18:23:33.591341   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.592402   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.592421   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.592789   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.592865   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
	I0421 18:23:33.592889   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I0421 18:23:33.593335   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.593347   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.593394   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.594305   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.594320   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.594322   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.594337   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.594685   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.594919   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.595156   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.595316   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.595530   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0421 18:23:33.595965   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.596771   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.596921   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.596934   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.598969   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0421 18:23:33.597295   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.597316   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.597905   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.602663   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:23:33.601374   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.604242   12353 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:23:33.605161   12353 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:23:33.605175   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 18:23:33.605191   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.610201   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:23:33.604295   12353 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0421 18:23:33.606940   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.608259   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.608835   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.611691   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.612979   12353 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0421 18:23:33.612991   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0421 18:23:33.611785   12353 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0421 18:23:33.613021   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0421 18:23:33.613039   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.611812   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.613086   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.614473   12353 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0421 18:23:33.611969   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.613005   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.615803   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 18:23:33.615815   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 18:23:33.615834   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.616021   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.616289   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.616836   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.616859   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.618306   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.618465   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.618626   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.618749   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.619539   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.619914   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.619933   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.620031   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.620144   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.620253   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.620342   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.620524   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.620792   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.620822   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.621050   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.621162   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.621286   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.621394   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
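(Editorial aside, not part of the captured log.) The burst of `sshutil.go:53] new ssh client` entries above shows minikube opening one SSH session per addon file transfer, each built from the same IP/Port/SSHKeyPath/Username tuple. A minimal Go sketch of opening such a connection with golang.org/x/crypto/ssh follows; the helper name, error handling, and host-key policy are assumptions for illustration, not minikube's actual sshutil implementation.

    package main

    import (
        "fmt"
        "net"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialSSH opens an SSH connection from a private key file, mirroring the
    // IP/Port/SSHKeyPath/Username fields printed by sshutil above (hypothetical helper).
    func dialSSH(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, fmt.Errorf("read key: %w", err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, fmt.Errorf("parse key: %w", err)
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
        }
        return ssh.Dial("tcp", net.JoinHostPort(ip, fmt.Sprint(port)), cfg)
    }

    func main() {
        client, err := dialSSH("192.168.39.51", 22,
            "/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa", "docker")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }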
	I0421 18:23:33.967147   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0421 18:23:33.984189   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0421 18:23:33.984212   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0421 18:23:34.020380   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0421 18:23:34.134156   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:23:34.167279   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0421 18:23:34.179870   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0421 18:23:34.225203   12353 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0421 18:23:34.225226   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0421 18:23:34.227900   12353 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0421 18:23:34.227920   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0421 18:23:34.245806   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 18:23:34.255276   12353 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0421 18:23:34.255303   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0421 18:23:34.258271   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 18:23:34.258291   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0421 18:23:34.261142   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0421 18:23:34.261158   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0421 18:23:34.275699   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0421 18:23:34.284031   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0421 18:23:34.284050   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0421 18:23:34.310635   12353 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0421 18:23:34.310658   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0421 18:23:34.376390   12353 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0421 18:23:34.376409   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0421 18:23:34.405862   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 18:23:34.405885   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 18:23:34.430877   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0421 18:23:34.430904   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0421 18:23:34.463086   12353 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.037283077s)
	I0421 18:23:34.463122   12353 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.034051205s)
	I0421 18:23:34.463183   12353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:23:34.463254   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
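(Editorial aside, not part of the captured log.) The /bin/bash pipeline above edits the coredns ConfigMap in place: it injects a hosts block ahead of the `forward . /etc/resolv.conf` directive so that host.minikube.internal resolves to the host-only gateway 192.168.39.1, and adds a `log` directive ahead of `errors`. Reconstructed from the sed expressions (not captured verbatim in this log), the affected part of the Corefile ends up roughly like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The later "host record injected into CoreDNS's ConfigMap" line confirms the replace succeeded.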
	I0421 18:23:34.477680   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0421 18:23:34.477700   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0421 18:23:34.497615   12353 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0421 18:23:34.497638   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0421 18:23:34.569616   12353 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0421 18:23:34.569639   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0421 18:23:34.574884   12353 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0421 18:23:34.574902   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0421 18:23:34.598290   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0421 18:23:34.598308   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0421 18:23:34.627119   12353 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0421 18:23:34.627144   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0421 18:23:34.664234   12353 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0421 18:23:34.664256   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0421 18:23:34.707872   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 18:23:34.707897   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 18:23:34.726284   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0421 18:23:34.795968   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0421 18:23:34.795989   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0421 18:23:34.856966   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 18:23:34.905616   12353 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0421 18:23:34.905648   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0421 18:23:34.913190   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0421 18:23:34.994288   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0421 18:23:34.994318   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0421 18:23:35.013022   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0421 18:23:35.013045   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0421 18:23:35.090172   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0421 18:23:35.090190   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0421 18:23:35.161726   12353 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0421 18:23:35.161749   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0421 18:23:35.338273   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0421 18:23:35.338294   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0421 18:23:35.429232   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0421 18:23:35.476772   12353 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:23:35.476793   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0421 18:23:35.585415   12353 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0421 18:23:35.585437   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0421 18:23:35.619649   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0421 18:23:35.619675   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0421 18:23:35.864060   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.896878725s)
	I0421 18:23:35.864110   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:35.864119   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:35.864431   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:35.864451   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:35.864462   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:35.864475   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:35.864697   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:35.864723   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:35.864742   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:35.899840   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:23:35.980104   12353 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0421 18:23:35.980133   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0421 18:23:35.985940   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0421 18:23:35.985971   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0421 18:23:36.285515   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0421 18:23:36.362818   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0421 18:23:36.362845   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0421 18:23:36.602741   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0421 18:23:36.602772   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0421 18:23:37.106348   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0421 18:23:37.905532   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.885113672s)
	I0421 18:23:37.905586   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:37.905598   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:37.905895   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:37.905919   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:37.905929   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:37.905937   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:37.906219   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:37.906236   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:37.906255   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.034543   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.900354544s)
	I0421 18:23:39.034581   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.034589   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.034608   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.867298277s)
	I0421 18:23:39.034650   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.034666   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.034895   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.034910   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:39.034919   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.034927   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.034951   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.034969   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.034980   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:39.034993   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.035000   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.035054   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.035179   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.035187   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:39.035317   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.035329   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.035344   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:40.570318   12353 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0421 18:23:40.570358   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:40.574148   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:40.574587   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:40.574611   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:40.574800   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:40.575024   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:40.575193   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:40.575332   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:41.317473   12353 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0421 18:23:41.386155   12353 addons.go:234] Setting addon gcp-auth=true in "addons-337450"
	I0421 18:23:41.386213   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:41.386564   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:41.386594   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:41.402217   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0421 18:23:41.402723   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:41.403184   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:41.403212   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:41.403559   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:41.404138   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:41.404193   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:41.418969   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0421 18:23:41.419374   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:41.419870   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:41.419890   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:41.420236   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:41.420436   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:41.421949   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:41.422214   12353 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0421 18:23:41.422241   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:41.424969   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:41.425342   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:41.425368   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:41.425552   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:41.425735   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:41.425910   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:41.426050   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:43.086166   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.906249647s)
	I0421 18:23:43.086191   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.840352565s)
	I0421 18:23:43.086224   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086227   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086238   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086241   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086276   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.810546541s)
	I0421 18:23:43.086301   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086304   12353 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.623026687s)
	I0421 18:23:43.086318   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086322   12353 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.623121369s)
	I0421 18:23:43.086362   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.360042892s)
	I0421 18:23:43.086324   12353 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0421 18:23:43.086381   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086391   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086423   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.229423311s)
	I0421 18:23:43.086440   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086451   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086463   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.17324988s)
	I0421 18:23:43.086479   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086488   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086504   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086529   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086535   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086543   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086549   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086551   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.657293307s)
	I0421 18:23:43.086565   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086573   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086692   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.186820688s)
	W0421 18:23:43.086718   12353 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0421 18:23:43.086733   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086735   12353 retry.go:31] will retry after 241.171317ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
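(Editorial aside, not part of the captured log.) The failure above is the usual CRD ordering race: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml cannot be mapped because the volumesnapshotclasses CRD created by the very same apply has not yet been registered by the API server, hence kubectl's "ensure CRDs are installed first". minikube copes by retrying the whole apply after a short backoff (the 241ms retry noted above) and, further down in this log, by re-running it with `kubectl apply --force`. A minimal retry loop of that shape in Go might look like the following; this is a sketch, not minikube's actual retry.go.

    package main

    import (
        "fmt"
        "time"
    )

    // retryApply re-runs fn with a growing delay until it succeeds or attempts run out.
    // Hypothetical sketch; minikube's retry.go uses its own backoff policy.
    func retryApply(fn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(base * time.Duration(i+1)) // back off a little longer each round
        }
        return fmt.Errorf("apply still failing after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryApply(func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("no matches for kind VolumeSnapshotClass") // simulated CRD race
            }
            return nil
        }, 5, 200*time.Millisecond)
        fmt.Println(calls, err)
    }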
	I0421 18:23:43.086761   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086777   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086786   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086789   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086796   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086800   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086804   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086807   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086812   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086824   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.801276028s)
	I0421 18:23:43.086839   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086848   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086907   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086929   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086937   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086944   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086952   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.087273   12353 node_ready.go:35] waiting up to 6m0s for node "addons-337450" to be "Ready" ...
	I0421 18:23:43.087387   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087407   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087414   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087430   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.087437   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.087442   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.087445   12353 addons.go:470] Verifying addon ingress=true in "addons-337450"
	I0421 18:23:43.087449   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.087458   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.087465   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.087465   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.087475   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.087483   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.087490   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.090794   12353 out.go:177] * Verifying ingress addon...
	I0421 18:23:43.087514   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087530   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.088477   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.088501   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.088520   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.088536   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.088558   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.089402   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.089432   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.089450   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.089466   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.089478   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.089493   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.091992   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092001   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092022   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092033   12353 addons.go:470] Verifying addon registry=true in "addons-337450"
	I0421 18:23:43.092037   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092040   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.093389   12353 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-337450 service yakd-dashboard -n yakd-dashboard
	
	I0421 18:23:43.092080   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.092103   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092110   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092870   12353 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0421 18:23:43.094856   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.096182   12353 out.go:177] * Verifying registry addon...
	I0421 18:23:43.096187   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.097782   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.096382   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.097829   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.096394   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.098024   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.098037   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.098045   12353 addons.go:470] Verifying addon metrics-server=true in "addons-337450"
	I0421 18:23:43.098047   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.098553   12353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0421 18:23:43.156420   12353 node_ready.go:49] node "addons-337450" has status "Ready":"True"
	I0421 18:23:43.156446   12353 node_ready.go:38] duration metric: took 69.15647ms for node "addons-337450" to be "Ready" ...
	I0421 18:23:43.156455   12353 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:23:43.178622   12353 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0421 18:23:43.178656   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:43.178822   12353 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0421 18:23:43.178846   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:43.234367   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.234396   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.234680   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.234728   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.234744   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	W0421 18:23:43.234835   12353 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
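(Editorial aside, not part of the captured log.) The warning above is an optimistic-concurrency conflict: two writers raced to update the local-path StorageClass (its storageclass.kubernetes.io/is-default-class annotation), and the second write carried a stale resourceVersion. The standard remedy is to re-read the object and re-apply the change on conflict, e.g. with client-go's retry.RetryOnConflict; the sketch below illustrates that pattern and is not the code path minikube actually uses here.

    package storageclass

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault sets the is-default-class annotation on a StorageClass, retrying on
    // resourceVersion conflicts like the one logged above. Hypothetical helper.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string, isDefault bool) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            if isDefault {
                sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            } else {
                sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            }
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err // RetryOnConflict re-runs this closure only for Conflict errors
        })
    }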
	I0421 18:23:43.266513   12353 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:43.308816   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.308835   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.309121   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.309137   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.328590   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:23:43.593277   12353 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-337450" context rescaled to 1 replicas
	I0421 18:23:43.612050   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:43.614849   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:44.101274   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:44.103339   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:44.606005   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:44.610657   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:45.038455   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.932041101s)
	I0421 18:23:45.038501   12353 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.616263533s)
	I0421 18:23:45.038516   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.038529   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.040735   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:23:45.038873   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:45.038818   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.042609   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.042624   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.042638   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.044138   12353 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0421 18:23:45.042906   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:45.042914   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.045788   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.045800   12353 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-337450"
	I0421 18:23:45.045837   12353 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0421 18:23:45.045855   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0421 18:23:45.047616   12353 out.go:177] * Verifying csi-hostpath-driver addon...
	I0421 18:23:45.049608   12353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0421 18:23:45.076238   12353 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0421 18:23:45.076257   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:45.101984   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:45.106819   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:45.163079   12353 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0421 18:23:45.163101   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0421 18:23:45.207937   12353 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0421 18:23:45.207955   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0421 18:23:45.287700   12353 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:45.323023   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.994392183s)
	I0421 18:23:45.323075   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.323086   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.323330   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:45.323366   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.323393   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.323407   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.323421   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.323788   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.323808   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.373965   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0421 18:23:45.556285   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:45.607647   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:45.610148   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:46.054989   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:46.101026   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:46.103456   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:46.578953   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:46.628231   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.254223479s)
	I0421 18:23:46.628285   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:46.628303   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:46.628687   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:46.628715   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:46.628726   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:46.628734   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:46.628692   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:46.629010   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:46.629029   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:46.630468   12353 addons.go:470] Verifying addon gcp-auth=true in "addons-337450"
	I0421 18:23:46.632027   12353 out.go:177] * Verifying gcp-auth addon...
	I0421 18:23:46.634214   12353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0421 18:23:46.653243   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:46.653476   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:46.679705   12353 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0421 18:23:46.679724   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:47.057266   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:47.104528   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:47.111483   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:47.149162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:47.555655   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:47.609968   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:47.617750   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:47.638889   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:47.772833   12353 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:48.056025   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:48.100320   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:48.103582   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:48.137414   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:48.563553   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:48.631810   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:48.642616   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:48.652846   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:49.055963   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:49.101577   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:49.103601   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:49.137322   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:49.556050   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:49.601148   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:49.605399   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:49.638185   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:50.055640   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:50.101635   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:50.105183   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:50.138083   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:50.274375   12353 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:50.559259   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:50.603655   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:50.604108   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:50.638115   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:51.056036   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:51.100281   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:51.103156   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:51.139677   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:51.556605   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:51.601167   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:51.603500   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:51.639007   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:52.055903   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:52.101400   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:52.104181   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:52.139201   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:52.281461   12353 pod_ready.go:97] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.51 HostIPs:[{IP:192.168.39.51}] PodIP: PodIPs:[] StartTime:2024-04-21 18:23:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 18:23:36 +0000 UTC,FinishedAt:2024-04-21 18:23:48 +0000 UTC,ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf Started:0xc0021dda50 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 18:23:52.281494   12353 pod_ready.go:81] duration metric: took 9.014957574s for pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace to be "Ready" ...
	E0421 18:23:52.281506   12353 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.51 HostIPs:[{IP:192.168.39.51}] PodIP: PodIPs:[] StartTime:2024-04-21 18:23:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 18:23:36 +0000 UTC,FinishedAt:2024-04-21 18:23:48 +0000 UTC,ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf Started:0xc0021dda50 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 18:23:52.281514   12353 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zkbzm" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.288121   12353 pod_ready.go:92] pod "coredns-7db6d8ff4d-zkbzm" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.288144   12353 pod_ready.go:81] duration metric: took 6.620519ms for pod "coredns-7db6d8ff4d-zkbzm" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.288154   12353 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.299535   12353 pod_ready.go:92] pod "etcd-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.299564   12353 pod_ready.go:81] duration metric: took 11.399605ms for pod "etcd-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.299577   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.318918   12353 pod_ready.go:92] pod "kube-apiserver-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.318948   12353 pod_ready.go:81] duration metric: took 19.362263ms for pod "kube-apiserver-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.318962   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.339278   12353 pod_ready.go:92] pod "kube-controller-manager-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.339305   12353 pod_ready.go:81] duration metric: took 20.335162ms for pod "kube-controller-manager-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.339322   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n76l5" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.557143   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:52.603647   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:52.608439   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:52.638493   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:52.683576   12353 pod_ready.go:92] pod "kube-proxy-n76l5" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.683609   12353 pod_ready.go:81] duration metric: took 344.278927ms for pod "kube-proxy-n76l5" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.683623   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:53.057032   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:53.070604   12353 pod_ready.go:92] pod "kube-scheduler-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:53.070627   12353 pod_ready.go:81] duration metric: took 386.996836ms for pod "kube-scheduler-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:53.070637   12353 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:53.102028   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:53.104308   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:53.138617   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:53.556757   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:53.602824   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:53.605065   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:53.637564   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:54.056531   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:54.103001   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:54.103313   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:54.138174   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:54.556835   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:54.603380   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:54.605296   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:54.638817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:55.055187   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:55.077081   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:55.100868   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:55.103208   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:55.138761   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:55.558333   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:55.602972   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:55.604271   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:55.638001   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:56.071185   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:56.101366   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:56.107068   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:56.138688   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:56.558072   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:56.606658   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:56.608056   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:56.637737   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:57.062398   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:57.082900   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:57.102847   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:57.110360   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:57.138462   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:57.556374   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:57.601178   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:57.610030   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:57.639529   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:58.061977   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:58.114349   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:58.120278   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:58.137949   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:58.557836   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:58.600636   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:58.606249   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:58.638252   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:59.059075   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:59.100522   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:59.103777   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:59.138756   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:59.556947   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:59.577131   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:59.600742   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:59.607292   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:59.638312   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:00.062119   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:00.101224   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:00.105787   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:00.138091   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:00.696097   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:00.698664   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:00.700787   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:00.703569   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:01.056390   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:01.101026   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:01.103737   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:01.138639   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:01.562317   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:01.577182   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:01.603957   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:01.606157   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:01.638422   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:02.055907   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:02.100867   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:02.103270   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:02.137718   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:02.558906   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:02.603322   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:02.607952   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:02.637501   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:03.056331   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:03.101275   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:03.103605   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:03.139200   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:03.555768   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:03.601665   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:03.605366   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:03.638461   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:04.066742   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:04.094939   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:04.108168   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:04.108302   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:04.138320   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:04.555725   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:04.601861   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:04.603109   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:04.638462   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:05.056748   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:05.100501   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:05.105698   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:05.138802   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:05.560041   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:05.601617   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:05.604331   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:05.637777   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:06.056672   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:06.101935   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:06.103956   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:06.138521   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:06.557004   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:06.576773   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:06.602479   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:06.603600   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:06.639188   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:07.055162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:07.100485   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:07.106018   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:07.138774   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:07.557602   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:07.603313   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:07.607438   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:07.638382   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:08.056491   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:08.101266   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:08.104434   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:08.138603   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:08.556413   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:08.578559   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:08.601604   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:08.604211   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:08.640863   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:09.056457   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:09.101266   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:09.104354   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:09.138536   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:09.556493   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:09.601272   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:09.603817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:09.637511   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:10.056371   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:10.101090   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:10.103661   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:10.137883   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:10.557234   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:10.600885   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:10.603637   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:10.638340   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:11.170359   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:11.171402   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:11.171681   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:11.173995   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:11.174817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:11.556200   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:11.606113   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:11.610429   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:11.638644   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:12.058253   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:12.100185   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:12.102513   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:12.138804   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:12.555639   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:12.603472   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:12.605005   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:12.638795   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:13.055405   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:13.101151   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:13.104140   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:13.138939   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:13.560585   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:13.579659   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:13.600630   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:13.603583   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:13.638687   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:14.058191   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:14.104109   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:14.104455   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:14.138582   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:14.555348   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:14.602034   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:14.606171   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:14.638357   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:15.061433   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:15.100554   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:15.103050   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:15.139319   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:15.560992   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:15.602566   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:15.604318   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:15.641753   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:16.054884   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:16.076275   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:16.102141   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:16.104056   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:16.138817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:16.555789   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:16.603871   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:16.611133   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:16.638005   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:17.060005   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:17.101843   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:17.106462   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:17.139061   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:17.556222   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:17.602563   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:17.611443   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:17.639549   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:18.054927   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:18.076515   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:18.099842   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:18.102652   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:18.137699   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:18.558100   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:18.606548   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:18.607232   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:18.638274   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:19.057178   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:19.102360   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:19.104398   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:19.138595   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:19.557463   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:19.602523   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:19.604091   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:19.638391   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:20.056025   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:20.077401   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:20.101078   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:20.104410   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:20.138746   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:20.571619   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:20.606366   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:20.612716   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:20.638818   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:21.056536   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:21.100768   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:21.104326   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:21.138977   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:21.561862   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:21.602810   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:21.606689   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:21.637848   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:22.060873   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:22.082712   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:22.103166   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:22.117132   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:22.143839   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:22.555669   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:22.606212   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:22.608845   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:22.637881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:23.056270   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:23.100727   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:23.104039   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:23.138450   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:23.566618   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:23.603660   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:23.611460   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:23.638471   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:24.058211   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:24.103055   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:24.105937   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:24.138388   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:24.557875   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:24.576577   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:24.605303   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:24.608546   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:24.638576   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:25.055125   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:25.100696   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:25.104459   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:25.139721   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:25.555792   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:25.603554   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:25.608259   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:25.637927   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:26.055350   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:26.100787   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:26.105783   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:26.138541   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:26.898905   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:26.903269   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:26.906973   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:26.907364   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:26.907377   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:27.061964   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:27.101141   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:27.106127   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:27.138240   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:27.556739   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:27.603375   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:27.605590   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:27.637796   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:28.055612   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:28.101215   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:28.104162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:28.138197   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:28.556131   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:28.601103   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:28.605237   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:28.637922   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:29.060652   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:29.077489   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:29.100491   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:29.103963   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:29.138395   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:29.560694   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:29.601064   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:29.604451   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:29.638722   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:30.055385   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:30.112429   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:30.114583   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:30.139077   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:30.555669   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:30.608285   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:30.609116   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:30.638079   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:31.056026   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:31.078961   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:31.102489   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:31.103915   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:31.138087   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:31.557672   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:31.605874   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:31.606054   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:31.638242   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:32.055758   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:32.101369   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:32.103998   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:32.137937   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:32.554919   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:32.601603   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:32.605429   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:32.643261   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:33.065236   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:33.083526   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:33.100360   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:33.105555   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:33.139671   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:33.559140   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:33.601427   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:33.604603   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:33.637615   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:34.055896   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:34.101556   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:34.106965   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:34.137681   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:34.560000   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:34.601295   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:34.604360   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:34.638615   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:35.057620   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:35.101183   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:35.108510   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:35.138971   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:35.556473   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:35.577554   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:35.600451   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:35.606234   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:35.638694   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:36.055218   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:36.101130   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:36.104945   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:36.138002   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:36.831048   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:36.831653   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:36.833645   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:36.835219   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:37.056345   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:37.101156   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:37.109628   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:37.138656   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:37.560127   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:37.577943   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:37.601947   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:37.605853   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:37.637796   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:38.057331   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:38.100114   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:38.102592   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:38.137600   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:38.556673   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:38.603665   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:38.605713   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:38.637649   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:39.059322   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:39.109702   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:39.110157   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:39.155839   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:39.560687   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:39.586363   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:39.609653   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:39.617843   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:39.639070   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:40.055695   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:40.100096   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:40.102768   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:40.138561   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:40.557329   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:40.600509   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:40.605064   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:40.638028   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:41.064700   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:41.101135   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:41.103923   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:41.138376   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:41.556123   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:41.600813   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:41.608573   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:41.639637   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:42.055592   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:42.076544   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:42.100643   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:42.117983   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:42.137881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:42.556303   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:42.601208   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:42.604977   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:42.638091   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:43.359176   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:43.359898   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:43.361095   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:43.363486   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:43.557369   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:43.607763   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:43.610620   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:43.638476   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:44.056606   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:44.077398   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:44.099953   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:44.102454   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:44.138870   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:44.558944   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:44.611458   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:44.616146   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:44.643909   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:45.056028   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:45.100795   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:45.104777   12353 kapi.go:107] duration metric: took 1m2.006225704s to wait for kubernetes.io/minikube-addons=registry ...
	I0421 18:24:45.137779   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:45.554791   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:45.603547   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:45.638441   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:46.058881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:46.103912   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:46.138032   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:46.555643   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:46.577452   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:46.604969   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:46.644037   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:47.057166   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:47.101314   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:47.138118   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:47.559393   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:47.601230   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:47.637410   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:48.063352   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:48.100861   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:48.139045   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:48.557388   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:48.604333   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:48.637987   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:49.055893   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:49.077363   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:49.101914   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:49.138912   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:49.555283   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:49.601981   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:49.638472   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:50.056193   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:50.401603   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:50.402668   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:50.562741   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:50.612712   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:50.645160   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:51.057566   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:51.077713   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:51.100799   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:51.137902   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:51.555384   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:51.601435   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:51.638440   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:52.057268   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:52.103239   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:52.142920   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:52.563534   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:52.600378   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:52.637854   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:53.061913   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:53.087651   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:53.099993   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:53.137546   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:53.556762   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:53.600511   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:53.638317   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:54.055750   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:54.104535   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:54.148080   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:54.556622   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:54.601555   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:54.638617   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:55.066674   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:55.106961   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:55.128529   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:55.150436   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:55.563503   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:55.627496   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:55.654629   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:56.082556   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:56.118485   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:56.140635   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:56.557752   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:56.605754   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:56.639604   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:57.057978   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:57.101312   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:57.139498   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:57.564522   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:57.577735   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:57.600592   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:57.638873   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:58.064376   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:58.102720   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:58.481670   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:58.569881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:58.601961   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:58.638169   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:59.057213   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:59.105759   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:59.138539   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:59.557913   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:59.583436   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:59.613629   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:59.640649   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:00.056945   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:00.102411   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:00.138313   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:00.556148   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:00.600755   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:00.639307   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:01.056402   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:01.100893   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:01.138605   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:01.558577   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:01.612750   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:01.641092   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:02.056710   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:02.083002   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:02.102254   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:02.138786   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:02.555709   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:02.600684   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:02.637964   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:03.078243   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:03.195202   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:03.198137   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:03.559271   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:03.605538   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:03.638645   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:04.056288   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:04.099836   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:04.138821   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:04.561791   12353 kapi.go:107] duration metric: took 1m19.512183127s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0421 18:25:04.577287   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:04.601207   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:04.638417   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:05.109557   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:05.139454   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:05.601815   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:05.639047   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:06.101936   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:06.138355   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:06.578953   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:06.603740   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:06.639238   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:07.099911   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:07.137970   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:07.600812   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:07.637945   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:08.101711   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:08.138707   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:08.579850   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:08.604494   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:08.639303   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:09.100505   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:09.139440   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:09.601516   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:09.637542   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:10.101070   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:10.138432   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:10.601127   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:10.638029   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:11.077932   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:11.100720   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:11.138743   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:11.602174   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:11.638689   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:12.101424   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:12.139588   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:12.603119   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:12.639855   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:13.079758   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:13.101420   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:13.138573   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:13.603409   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:13.637812   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:14.100455   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:14.138146   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:14.605928   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:14.638198   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:15.101856   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:15.138748   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:15.578803   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:15.607205   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:15.638962   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:16.100851   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:16.138143   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:16.605271   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:16.638083   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:17.101220   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:17.138738   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:17.602686   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:17.639053   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:18.078777   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:18.100707   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:18.140501   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:18.601567   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:18.638727   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:19.100863   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:19.138231   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:19.601776   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:19.638524   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:20.101433   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:20.138227   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:20.580059   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:20.602708   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:20.639049   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:21.282600   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:21.283262   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:21.607910   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:21.637788   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:22.102247   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:22.138290   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:22.602114   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:22.637696   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:23.077366   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:23.101324   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:23.139476   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:23.602001   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:23.638374   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:24.101323   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:24.138547   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:24.601016   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:24.637972   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:25.101337   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:25.138647   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:25.576950   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:25.602268   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:25.638123   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:26.100959   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:26.137664   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:26.603010   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:26.637732   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:27.100441   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:27.138207   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:27.577917   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:27.602085   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:27.637745   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:28.100802   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:28.139093   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:28.604274   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:28.638602   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:29.101255   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:29.138303   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:29.578245   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:29.602901   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:29.638905   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:30.103243   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:30.137566   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:30.580206   12353 pod_ready.go:92] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"True"
	I0421 18:25:30.580231   12353 pod_ready.go:81] duration metric: took 1m37.509588555s for pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace to be "Ready" ...
	I0421 18:25:30.580241   12353 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hggr8" in "kube-system" namespace to be "Ready" ...
	I0421 18:25:30.588202   12353 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-hggr8" in "kube-system" namespace has status "Ready":"True"
	I0421 18:25:30.588220   12353 pod_ready.go:81] duration metric: took 7.973227ms for pod "nvidia-device-plugin-daemonset-hggr8" in "kube-system" namespace to be "Ready" ...
	I0421 18:25:30.588238   12353 pod_ready.go:38] duration metric: took 1m47.43177281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:25:30.588255   12353 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:25:30.588302   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 18:25:30.588354   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 18:25:30.600757   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:30.638842   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:30.654484   12353 cri.go:89] found id: "fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:30.654505   12353 cri.go:89] found id: ""
	I0421 18:25:30.654515   12353 logs.go:276] 1 containers: [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8]
	I0421 18:25:30.654567   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.661089   12353 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 18:25:30.661171   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 18:25:30.700957   12353 cri.go:89] found id: "dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:30.700975   12353 cri.go:89] found id: ""
	I0421 18:25:30.700982   12353 logs.go:276] 1 containers: [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead]
	I0421 18:25:30.701037   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.705882   12353 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 18:25:30.705957   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 18:25:30.746323   12353 cri.go:89] found id: "5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:30.746345   12353 cri.go:89] found id: ""
	I0421 18:25:30.746354   12353 logs.go:276] 1 containers: [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab]
	I0421 18:25:30.746401   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.751039   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 18:25:30.751112   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 18:25:30.792285   12353 cri.go:89] found id: "eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:30.792308   12353 cri.go:89] found id: ""
	I0421 18:25:30.792327   12353 logs.go:276] 1 containers: [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4]
	I0421 18:25:30.792386   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.796968   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 18:25:30.797021   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 18:25:30.848234   12353 cri.go:89] found id: "7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:30.848259   12353 cri.go:89] found id: ""
	I0421 18:25:30.848269   12353 logs.go:276] 1 containers: [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1]
	I0421 18:25:30.848326   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.853159   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 18:25:30.853223   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 18:25:30.894417   12353 cri.go:89] found id: "78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:30.894443   12353 cri.go:89] found id: ""
	I0421 18:25:30.894452   12353 logs.go:276] 1 containers: [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af]
	I0421 18:25:30.894510   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.899109   12353 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 18:25:30.899177   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 18:25:30.938505   12353 cri.go:89] found id: ""
	I0421 18:25:30.938535   12353 logs.go:276] 0 containers: []
	W0421 18:25:30.938545   12353 logs.go:278] No container was found matching "kindnet"
	I0421 18:25:30.938555   12353 logs.go:123] Gathering logs for dmesg ...
	I0421 18:25:30.938568   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 18:25:30.954688   12353 logs.go:123] Gathering logs for describe nodes ...
	I0421 18:25:30.954715   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0421 18:25:31.109571   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:31.128037   12353 logs.go:123] Gathering logs for kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] ...
	I0421 18:25:31.128080   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:31.155811   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:31.213189   12353 logs.go:123] Gathering logs for etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] ...
	I0421 18:25:31.213219   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:31.281895   12353 logs.go:123] Gathering logs for kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] ...
	I0421 18:25:31.281927   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:31.347198   12353 logs.go:123] Gathering logs for CRI-O ...
	I0421 18:25:31.347229   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 18:25:31.602213   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:31.638141   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:32.101596   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:32.138023   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:32.241489   12353 logs.go:123] Gathering logs for container status ...
	I0421 18:25:32.241541   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 18:25:32.310770   12353 logs.go:123] Gathering logs for kubelet ...
	I0421 18:25:32.310798   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0421 18:25:32.365655   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:32.365815   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:32.404266   12353 logs.go:123] Gathering logs for coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] ...
	I0421 18:25:32.404304   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:32.446876   12353 logs.go:123] Gathering logs for kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] ...
	I0421 18:25:32.446900   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:32.491759   12353 logs.go:123] Gathering logs for kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] ...
	I0421 18:25:32.491791   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:32.563813   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:32.563842   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:25:32.563901   12353 out.go:239] X Problems detected in kubelet:
	W0421 18:25:32.563916   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:32.563927   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:32.563940   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:32.563951   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:25:32.601676   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:32.638572   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:33.101817   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:33.138937   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:33.602086   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:33.637545   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:34.101791   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:34.138808   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:34.602705   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:34.638229   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:35.101378   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:35.138162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:35.602155   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:35.637319   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:36.101665   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:36.138355   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:36.600889   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:36.638593   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:37.101869   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:37.139149   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:37.601615   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:37.638777   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:38.102675   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:38.138012   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:38.600720   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:38.637882   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:39.102213   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:39.138240   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:39.603636   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:39.638242   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:40.100875   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:40.138486   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:40.601610   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:40.638925   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:41.101500   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:41.138657   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:41.603751   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:41.639353   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:42.102026   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:42.137955   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:42.565265   12353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:25:42.590951   12353 api_server.go:72] duration metric: took 2m9.165125601s to wait for apiserver process to appear ...
	I0421 18:25:42.590982   12353 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:25:42.591020   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 18:25:42.591081   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 18:25:42.601367   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:42.638608   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:42.644189   12353 cri.go:89] found id: "fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:42.644213   12353 cri.go:89] found id: ""
	I0421 18:25:42.644223   12353 logs.go:276] 1 containers: [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8]
	I0421 18:25:42.644286   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.651015   12353 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 18:25:42.651085   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 18:25:42.699231   12353 cri.go:89] found id: "dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:42.699257   12353 cri.go:89] found id: ""
	I0421 18:25:42.699266   12353 logs.go:276] 1 containers: [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead]
	I0421 18:25:42.699313   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.704853   12353 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 18:25:42.704924   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 18:25:42.747617   12353 cri.go:89] found id: "5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:42.747638   12353 cri.go:89] found id: ""
	I0421 18:25:42.747645   12353 logs.go:276] 1 containers: [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab]
	I0421 18:25:42.747688   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.752457   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 18:25:42.752515   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 18:25:42.792807   12353 cri.go:89] found id: "eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:42.792833   12353 cri.go:89] found id: ""
	I0421 18:25:42.792843   12353 logs.go:276] 1 containers: [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4]
	I0421 18:25:42.792903   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.797425   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 18:25:42.797479   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 18:25:42.839251   12353 cri.go:89] found id: "7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:42.839278   12353 cri.go:89] found id: ""
	I0421 18:25:42.839287   12353 logs.go:276] 1 containers: [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1]
	I0421 18:25:42.839349   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.844625   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 18:25:42.844686   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 18:25:42.886572   12353 cri.go:89] found id: "78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:42.886589   12353 cri.go:89] found id: ""
	I0421 18:25:42.886596   12353 logs.go:276] 1 containers: [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af]
	I0421 18:25:42.886642   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.892133   12353 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 18:25:42.892204   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 18:25:42.939974   12353 cri.go:89] found id: ""
	I0421 18:25:42.939998   12353 logs.go:276] 0 containers: []
	W0421 18:25:42.940005   12353 logs.go:278] No container was found matching "kindnet"
	I0421 18:25:42.940013   12353 logs.go:123] Gathering logs for etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] ...
	I0421 18:25:42.940024   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:43.007838   12353 logs.go:123] Gathering logs for coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] ...
	I0421 18:25:43.007873   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:43.051522   12353 logs.go:123] Gathering logs for dmesg ...
	I0421 18:25:43.051550   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 18:25:43.071873   12353 logs.go:123] Gathering logs for describe nodes ...
	I0421 18:25:43.071910   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0421 18:25:43.102177   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:43.139138   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:43.208753   12353 logs.go:123] Gathering logs for kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] ...
	I0421 18:25:43.208782   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:43.263934   12353 logs.go:123] Gathering logs for kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] ...
	I0421 18:25:43.263969   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:43.316732   12353 logs.go:123] Gathering logs for kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] ...
	I0421 18:25:43.316764   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:43.362398   12353 logs.go:123] Gathering logs for kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] ...
	I0421 18:25:43.362425   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:43.429062   12353 logs.go:123] Gathering logs for CRI-O ...
	I0421 18:25:43.429096   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 18:25:43.601489   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:43.637867   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:44.101793   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:44.138433   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:44.375733   12353 logs.go:123] Gathering logs for container status ...
	I0421 18:25:44.375770   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 18:25:44.439709   12353 logs.go:123] Gathering logs for kubelet ...
	I0421 18:25:44.439745   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0421 18:25:44.490405   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:44.490565   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:44.537966   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:44.537996   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:25:44.538045   12353 out.go:239] X Problems detected in kubelet:
	W0421 18:25:44.538053   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:44.538071   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:44.538083   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:44.538089   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:25:44.602452   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:44.638139   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:45.101836   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:45.137880   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:45.602110   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:45.639232   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:46.100758   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:46.138683   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:46.605301   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:46.638082   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:47.101047   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:47.137664   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:47.602183   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:47.638143   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:48.101254   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:48.138476   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:48.602088   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:48.638503   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:49.101282   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:49.137848   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:49.602914   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:49.637859   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:50.101703   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:50.138743   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:50.602138   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:50.639009   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:51.101354   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:51.138314   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:51.600538   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:51.638169   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:52.101888   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:52.137334   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:52.601688   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:52.638469   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:53.102214   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:53.137797   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:53.600996   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:53.637938   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:54.102194   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:54.138264   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:54.538433   12353 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0421 18:25:54.543321   12353 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0421 18:25:54.544406   12353 api_server.go:141] control plane version: v1.30.0
	I0421 18:25:54.544426   12353 api_server.go:131] duration metric: took 11.953437344s to wait for apiserver health ...
	I0421 18:25:54.544434   12353 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:25:54.544454   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 18:25:54.544498   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 18:25:54.588978   12353 cri.go:89] found id: "fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:54.589005   12353 cri.go:89] found id: ""
	I0421 18:25:54.589015   12353 logs.go:276] 1 containers: [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8]
	I0421 18:25:54.589068   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.594941   12353 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 18:25:54.595002   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 18:25:54.600837   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:54.638987   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:54.656136   12353 cri.go:89] found id: "dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:54.656162   12353 cri.go:89] found id: ""
	I0421 18:25:54.656172   12353 logs.go:276] 1 containers: [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead]
	I0421 18:25:54.656219   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.662030   12353 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 18:25:54.662113   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 18:25:54.706766   12353 cri.go:89] found id: "5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:54.706785   12353 cri.go:89] found id: ""
	I0421 18:25:54.706792   12353 logs.go:276] 1 containers: [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab]
	I0421 18:25:54.706842   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.711407   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 18:25:54.711470   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 18:25:54.755558   12353 cri.go:89] found id: "eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:54.755579   12353 cri.go:89] found id: ""
	I0421 18:25:54.755587   12353 logs.go:276] 1 containers: [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4]
	I0421 18:25:54.755646   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.760592   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 18:25:54.760665   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 18:25:54.814929   12353 cri.go:89] found id: "7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:54.814951   12353 cri.go:89] found id: ""
	I0421 18:25:54.814960   12353 logs.go:276] 1 containers: [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1]
	I0421 18:25:54.815010   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.820641   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 18:25:54.820702   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 18:25:54.873830   12353 cri.go:89] found id: "78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:54.873857   12353 cri.go:89] found id: ""
	I0421 18:25:54.873867   12353 logs.go:276] 1 containers: [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af]
	I0421 18:25:54.873933   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.879042   12353 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 18:25:54.879113   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 18:25:54.924037   12353 cri.go:89] found id: ""
	I0421 18:25:54.924067   12353 logs.go:276] 0 containers: []
	W0421 18:25:54.924075   12353 logs.go:278] No container was found matching "kindnet"
	I0421 18:25:54.924083   12353 logs.go:123] Gathering logs for kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] ...
	I0421 18:25:54.924095   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:54.984377   12353 logs.go:123] Gathering logs for CRI-O ...
	I0421 18:25:54.984405   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 18:25:55.102081   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:55.139140   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:55.601698   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:55.638589   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:55.795107   12353 logs.go:123] Gathering logs for dmesg ...
	I0421 18:25:55.795145   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 18:25:55.815458   12353 logs.go:123] Gathering logs for describe nodes ...
	I0421 18:25:55.815485   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0421 18:25:55.941960   12353 logs.go:123] Gathering logs for coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] ...
	I0421 18:25:55.941985   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:55.993773   12353 logs.go:123] Gathering logs for kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] ...
	I0421 18:25:55.993797   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:56.046574   12353 logs.go:123] Gathering logs for kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] ...
	I0421 18:25:56.046604   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:56.095135   12353 logs.go:123] Gathering logs for kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] ...
	I0421 18:25:56.095164   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:56.101983   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:56.138255   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:56.164648   12353 logs.go:123] Gathering logs for container status ...
	I0421 18:25:56.164680   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 18:25:56.217362   12353 logs.go:123] Gathering logs for kubelet ...
	I0421 18:25:56.217395   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0421 18:25:56.268048   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:56.268208   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:56.308920   12353 logs.go:123] Gathering logs for etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] ...
	I0421 18:25:56.308958   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:56.376367   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:56.376401   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:25:56.376451   12353 out.go:239] X Problems detected in kubelet:
	W0421 18:25:56.376459   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:56.376466   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:56.376473   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:56.376478   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:25:56.601348   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:56.638193   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:57.100950   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:57.137856   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:57.601935   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:57.637857   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:58.102373   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:58.138235   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:58.601435   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:58.638615   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:59.101352   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:59.138410   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:59.600305   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:59.639445   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:00.101485   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:00.138791   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:00.601260   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:00.637627   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:01.101859   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:01.138820   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:01.706023   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:01.707060   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:02.101146   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:02.137784   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:02.601626   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:02.638068   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:03.102285   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:03.138487   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:03.602723   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:03.638382   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:04.102086   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:04.137787   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:04.603033   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:04.639703   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:05.102153   12353 kapi.go:107] duration metric: took 2m22.009281538s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0421 18:26:05.138321   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:05.638080   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:06.139134   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:06.389699   12353 system_pods.go:59] 18 kube-system pods found
	I0421 18:26:06.389731   12353 system_pods.go:61] "coredns-7db6d8ff4d-zkbzm" [404bcd18-a121-4e5f-8df6-8caccd78cec0] Running
	I0421 18:26:06.389736   12353 system_pods.go:61] "csi-hostpath-attacher-0" [861f5920-82bc-4203-aca8-d4d87a7fcf8d] Running
	I0421 18:26:06.389740   12353 system_pods.go:61] "csi-hostpath-resizer-0" [999d845e-8fac-4e5b-88d6-e2606bbb46ef] Running
	I0421 18:26:06.389743   12353 system_pods.go:61] "csi-hostpathplugin-g7zc7" [8d43afcc-7206-4031-897b-e27c738195ad] Running
	I0421 18:26:06.389747   12353 system_pods.go:61] "etcd-addons-337450" [d5b644a4-db2a-419c-8757-3ffc986caf95] Running
	I0421 18:26:06.389750   12353 system_pods.go:61] "kube-apiserver-addons-337450" [28de43a5-aabc-40ec-8311-778c57b6bb55] Running
	I0421 18:26:06.389754   12353 system_pods.go:61] "kube-controller-manager-addons-337450" [35e6ad95-2f09-47df-899d-06797c770946] Running
	I0421 18:26:06.389757   12353 system_pods.go:61] "kube-ingress-dns-minikube" [ebf19058-ca7a-4a46-8ce6-71aaac949202] Running
	I0421 18:26:06.389760   12353 system_pods.go:61] "kube-proxy-n76l5" [8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b] Running
	I0421 18:26:06.389763   12353 system_pods.go:61] "kube-scheduler-addons-337450" [171aeef7-e173-4942-b3d5-24070e00a658] Running
	I0421 18:26:06.389771   12353 system_pods.go:61] "metrics-server-c59844bb4-dkrx4" [6b506806-a7ad-4fa2-95ec-c1698f2f93e4] Running
	I0421 18:26:06.389774   12353 system_pods.go:61] "nvidia-device-plugin-daemonset-hggr8" [ab89f680-78cb-478b-929f-acea30c6e4c8] Running
	I0421 18:26:06.389781   12353 system_pods.go:61] "registry-hqdlr" [5295efd0-2d0b-45a9-92f4-12ac59b9f395] Running
	I0421 18:26:06.389784   12353 system_pods.go:61] "registry-proxy-psfhr" [29887109-7168-4513-91b6-e2f7615b03d0] Running
	I0421 18:26:06.389790   12353 system_pods.go:61] "snapshot-controller-745499f584-5plq8" [ba50b3a1-01aa-496b-9a48-e448c9325502] Running
	I0421 18:26:06.389794   12353 system_pods.go:61] "snapshot-controller-745499f584-wdfhr" [36de1d83-5283-4c07-ae6c-fbc01ccfe12d] Running
	I0421 18:26:06.389800   12353 system_pods.go:61] "storage-provisioner" [3eb02dc0-5b10-429a-b88d-90341a248055] Running
	I0421 18:26:06.389804   12353 system_pods.go:61] "tiller-deploy-6677d64bcd-lrdr7" [d0119b9a-443d-45f9-adeb-fc91c36d95a9] Running
	I0421 18:26:06.389812   12353 system_pods.go:74] duration metric: took 11.845372998s to wait for pod list to return data ...
	I0421 18:26:06.389822   12353 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:26:06.393112   12353 default_sa.go:45] found service account: "default"
	I0421 18:26:06.393132   12353 default_sa.go:55] duration metric: took 3.301985ms for default service account to be created ...
	I0421 18:26:06.393140   12353 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:26:06.402775   12353 system_pods.go:86] 18 kube-system pods found
	I0421 18:26:06.402800   12353 system_pods.go:89] "coredns-7db6d8ff4d-zkbzm" [404bcd18-a121-4e5f-8df6-8caccd78cec0] Running
	I0421 18:26:06.402806   12353 system_pods.go:89] "csi-hostpath-attacher-0" [861f5920-82bc-4203-aca8-d4d87a7fcf8d] Running
	I0421 18:26:06.402812   12353 system_pods.go:89] "csi-hostpath-resizer-0" [999d845e-8fac-4e5b-88d6-e2606bbb46ef] Running
	I0421 18:26:06.402819   12353 system_pods.go:89] "csi-hostpathplugin-g7zc7" [8d43afcc-7206-4031-897b-e27c738195ad] Running
	I0421 18:26:06.402828   12353 system_pods.go:89] "etcd-addons-337450" [d5b644a4-db2a-419c-8757-3ffc986caf95] Running
	I0421 18:26:06.402837   12353 system_pods.go:89] "kube-apiserver-addons-337450" [28de43a5-aabc-40ec-8311-778c57b6bb55] Running
	I0421 18:26:06.402845   12353 system_pods.go:89] "kube-controller-manager-addons-337450" [35e6ad95-2f09-47df-899d-06797c770946] Running
	I0421 18:26:06.402855   12353 system_pods.go:89] "kube-ingress-dns-minikube" [ebf19058-ca7a-4a46-8ce6-71aaac949202] Running
	I0421 18:26:06.402864   12353 system_pods.go:89] "kube-proxy-n76l5" [8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b] Running
	I0421 18:26:06.402868   12353 system_pods.go:89] "kube-scheduler-addons-337450" [171aeef7-e173-4942-b3d5-24070e00a658] Running
	I0421 18:26:06.402872   12353 system_pods.go:89] "metrics-server-c59844bb4-dkrx4" [6b506806-a7ad-4fa2-95ec-c1698f2f93e4] Running
	I0421 18:26:06.402879   12353 system_pods.go:89] "nvidia-device-plugin-daemonset-hggr8" [ab89f680-78cb-478b-929f-acea30c6e4c8] Running
	I0421 18:26:06.402884   12353 system_pods.go:89] "registry-hqdlr" [5295efd0-2d0b-45a9-92f4-12ac59b9f395] Running
	I0421 18:26:06.402890   12353 system_pods.go:89] "registry-proxy-psfhr" [29887109-7168-4513-91b6-e2f7615b03d0] Running
	I0421 18:26:06.402894   12353 system_pods.go:89] "snapshot-controller-745499f584-5plq8" [ba50b3a1-01aa-496b-9a48-e448c9325502] Running
	I0421 18:26:06.402901   12353 system_pods.go:89] "snapshot-controller-745499f584-wdfhr" [36de1d83-5283-4c07-ae6c-fbc01ccfe12d] Running
	I0421 18:26:06.402905   12353 system_pods.go:89] "storage-provisioner" [3eb02dc0-5b10-429a-b88d-90341a248055] Running
	I0421 18:26:06.402910   12353 system_pods.go:89] "tiller-deploy-6677d64bcd-lrdr7" [d0119b9a-443d-45f9-adeb-fc91c36d95a9] Running
	I0421 18:26:06.402917   12353 system_pods.go:126] duration metric: took 9.768642ms to wait for k8s-apps to be running ...
	I0421 18:26:06.402929   12353 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:26:06.403008   12353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:26:06.421690   12353 system_svc.go:56] duration metric: took 18.752011ms WaitForService to wait for kubelet
	I0421 18:26:06.421728   12353 kubeadm.go:576] duration metric: took 2m32.995908158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:26:06.421752   12353 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:26:06.425288   12353 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:26:06.425316   12353 node_conditions.go:123] node cpu capacity is 2
	I0421 18:26:06.425327   12353 node_conditions.go:105] duration metric: took 3.571194ms to run NodePressure ...
	I0421 18:26:06.425339   12353 start.go:240] waiting for startup goroutines ...
	I0421 18:26:06.640165   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:07.137588   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:07.638770   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:08.138202   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:08.640874   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:09.138418   12353 kapi.go:107] duration metric: took 2m22.504203249s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0421 18:26:09.140513   12353 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-337450 cluster.
	I0421 18:26:09.141962   12353 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0421 18:26:09.143269   12353 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0421 18:26:09.144552   12353 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, helm-tiller, yakd, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0421 18:26:09.146262   12353 addons.go:505] duration metric: took 2m35.720409836s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner helm-tiller yakd inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0421 18:26:09.146298   12353 start.go:245] waiting for cluster config update ...
	I0421 18:26:09.146315   12353 start.go:254] writing updated cluster config ...
	I0421 18:26:09.146535   12353 ssh_runner.go:195] Run: rm -f paused
	I0421 18:26:09.195895   12353 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 18:26:09.197832   12353 out.go:177] * Done! kubectl is now configured to use "addons-337450" cluster and "default" namespace by default
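The gcp-auth messages above describe how the addon's admission webhook injects GCP credentials into new pods and how to opt a pod out with the `gcp-auth-skip-secret` label. As a minimal illustrative sketch (the pod name, image, and label value below are assumptions; the log only names the label key and the `addons-337450` context/profile):

    # Create a pod the gcp-auth webhook should leave alone (label must be present at creation time)
    kubectl --context addons-337450 run skip-gcp-auth-demo --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

    # The readiness poll above waits on this selector; the same check can be run by hand
    kubectl --context addons-337450 get pods -A -l kubernetes.io/minikube-addons=gcp-auth

    # Confirm which addons ended up enabled for this profile
    minikube -p addons-337450 addons list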
	
	
	==> CRI-O <==
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.622765240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713724143622734530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1101acc-dc6a-413e-8dc5-ded9a5b157e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.623666618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2a79ffd-51e2-4f18-a357-d3d2b19c7e9a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.623765545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2a79ffd-51e2-4f18-a357-d3d2b19c7e9a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.624213338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b676bc962d722661287583dd53dbf86bf7f708ef674ef75b3e07a4ece7671d2,PodSandboxId:f6c89dc4f97b2cb1f8876d1ab1ae54643d421098a179673b56b6fe4a659da0ad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713723890635398159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zfl6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db264070-f639-45dc-b205-3d286eb77287,},Annotations:map[string]string{io.kubernetes.container.hash: 488196bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e45bc5e2c41d699bfe359cb752d6a3d3e0aab4c5931d681dff0ebc6e407022,PodSandboxId:80f93801183102b8f19cbcc0c90b1c58a61415ee49065544d0366d0ce3bb8c2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713723890486312789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s28gd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 802473e7-7d92-455d-8504-8b944f605d82,},Annotations:map[string]string{io.kubernetes.container.hash: a04f156b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713723877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSandboxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac157
3c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed8845
38116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa45129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392
a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b30
7d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2a79ffd-51e2-4f18-a357-d3d2b19c7e9a name=/runtime.v1.RuntimeServic
e/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.666964564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a83b8bca-289d-4d63-89a2-99b2840c5f7e name=/runtime.v1.RuntimeService/Version
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.667034784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a83b8bca-289d-4d63-89a2-99b2840c5f7e name=/runtime.v1.RuntimeService/Version
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.668211470Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11aa8137-f0d5-416c-86b1-9de96dd3e356 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.669872952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713724143669847669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11aa8137-f0d5-416c-86b1-9de96dd3e356 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.670842832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ea4828d-0484-4cc1-b0a5-3763959e0a63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.670896262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ea4828d-0484-4cc1-b0a5-3763959e0a63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.671253973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b676bc962d722661287583dd53dbf86bf7f708ef674ef75b3e07a4ece7671d2,PodSandboxId:f6c89dc4f97b2cb1f8876d1ab1ae54643d421098a179673b56b6fe4a659da0ad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713723890635398159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zfl6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db264070-f639-45dc-b205-3d286eb77287,},Annotations:map[string]string{io.kubernetes.container.hash: 488196bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e45bc5e2c41d699bfe359cb752d6a3d3e0aab4c5931d681dff0ebc6e407022,PodSandboxId:80f93801183102b8f19cbcc0c90b1c58a61415ee49065544d0366d0ce3bb8c2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713723890486312789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s28gd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 802473e7-7d92-455d-8504-8b944f605d82,},Annotations:map[string]string{io.kubernetes.container.hash: a04f156b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713723877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSandboxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac157
3c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed8845
38116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa45129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392
a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b30
7d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ea4828d-0484-4cc1-b0a5-3763959e0a63 name=/runtime.v1.RuntimeServic
e/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.709956278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e035e43-f86d-45ba-9a39-5fd9871b05fb name=/runtime.v1.RuntimeService/Version
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.710033976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e035e43-f86d-45ba-9a39-5fd9871b05fb name=/runtime.v1.RuntimeService/Version
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.711755097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7fc91515-0fd9-4f3c-b77a-3390034a1334 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.713216609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713724143713187274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7fc91515-0fd9-4f3c-b77a-3390034a1334 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.713943447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edbaa36b-cda4-4aec-8b4f-40406cdf7a24 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.714033969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edbaa36b-cda4-4aec-8b4f-40406cdf7a24 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.714399901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b676bc962d722661287583dd53dbf86bf7f708ef674ef75b3e07a4ece7671d2,PodSandboxId:f6c89dc4f97b2cb1f8876d1ab1ae54643d421098a179673b56b6fe4a659da0ad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713723890635398159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zfl6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db264070-f639-45dc-b205-3d286eb77287,},Annotations:map[string]string{io.kubernetes.container.hash: 488196bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e45bc5e2c41d699bfe359cb752d6a3d3e0aab4c5931d681dff0ebc6e407022,PodSandboxId:80f93801183102b8f19cbcc0c90b1c58a61415ee49065544d0366d0ce3bb8c2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713723890486312789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s28gd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 802473e7-7d92-455d-8504-8b944f605d82,},Annotations:map[string]string{io.kubernetes.container.hash: a04f156b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713723877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSandboxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac157
3c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed8845
38116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa45129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392
a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b30
7d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edbaa36b-cda4-4aec-8b4f-40406cdf7a24 name=/runtime.v1.RuntimeServic
e/ListContainers
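The repeated Version, ImageFsInfo, and ListContainers exchanges above are CRI clients (typically the kubelet and the test's log collection) polling CRI-O over the CRI API. As a sketch of how to inspect the same data by hand, assuming a shell on the node (for example via `minikube ssh -p addons-337450`) and the standard CRI-O socket path:

    # Runtime name/version, as in the VersionResponse entries
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

    # Unfiltered container list, as in the ListContainersResponse entries
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

    # Image filesystem usage, as in the ImageFsInfoResponse entries
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo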
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.761659278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9660bf02-ef2d-46b2-90aa-0b61d69d505c name=/runtime.v1.RuntimeService/Version
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.761774397Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9660bf02-ef2d-46b2-90aa-0b61d69d505c name=/runtime.v1.RuntimeService/Version
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.763710198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96174597-cad8-4e5f-8aa5-1c72e7f99697 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.765003825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713724143764976475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96174597-cad8-4e5f-8aa5-1c72e7f99697 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.765691734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2a16ac4-8912-46cb-a347-5854e6c4ac84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.765773472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2a16ac4-8912-46cb-a347-5854e6c4ac84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:29:03 addons-337450 crio[681]: time="2024-04-21 18:29:03.766133616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b676bc962d722661287583dd53dbf86bf7f708ef674ef75b3e07a4ece7671d2,PodSandboxId:f6c89dc4f97b2cb1f8876d1ab1ae54643d421098a179673b56b6fe4a659da0ad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713723890635398159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2zfl6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db264070-f639-45dc-b205-3d286eb77287,},Annotations:map[string]string{io.kubernetes.container.hash: 488196bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e45bc5e2c41d699bfe359cb752d6a3d3e0aab4c5931d681dff0ebc6e407022,PodSandboxId:80f93801183102b8f19cbcc0c90b1c58a61415ee49065544d0366d0ce3bb8c2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713723890486312789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s28gd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 802473e7-7d92-455d-8504-8b944f605d82,},Annotations:map[string]string{io.kubernetes.container.hash: a04f156b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713723877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172
e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSandboxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac157
3c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed8845
38116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa45129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392
a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b30
7d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2a16ac4-8912-46cb-a347-5854e6c4ac84 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4f6fed9d955b4       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago        Running             hello-world-app           0                   992250a4907ef       hello-world-app-86c47465fc-4hk7z
	15897eb42c44b       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        About a minute ago   Running             headlamp                  0                   ab21146143292       headlamp-7559bf459f-h8lsl
	0ea0d75315e97       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                              2 minutes ago        Running             nginx                     0                   7482face30761       nginx
	e45a0527f2fd3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago        Running             gcp-auth                  0                   cd3b55e259922       gcp-auth-5db96cd9b4-czh85
	9b676bc962d72       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago        Exited              patch                     0                   f6c89dc4f97b2       ingress-nginx-admission-patch-2zfl6
	92e45bc5e2c41       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago        Exited              create                    0                   80f9380118310       ingress-nginx-admission-create-s28gd
	9152bdb83e657       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   cd701ef748b0b       yakd-dashboard-5ddbf7d777-drwst
	3f4bf41743289       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago        Running             metrics-server            0                   3b9d139430d38       metrics-server-c59844bb4-dkrx4
	e5799cfbf50ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago        Running             storage-provisioner       0                   c9d65a814e8f2       storage-provisioner
	5311f7249669f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago        Running             coredns                   0                   2e9831c6cc61c       coredns-7db6d8ff4d-zkbzm
	7be7f865cd8c6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                             5 minutes ago        Running             kube-proxy                0                   e7391d8408602       kube-proxy-n76l5
	dcecdd0d880a4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago        Running             etcd                      0                   c188722987ccb       etcd-addons-337450
	fd969e1dcdde1       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                             5 minutes ago        Running             kube-apiserver            0                   e3713418f65a8       kube-apiserver-addons-337450
	eb8bec0fec02d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                             5 minutes ago        Running             kube-scheduler            0                   cd1ab8d4ba084       kube-scheduler-addons-337450
	78ac86de1b52b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                             5 minutes ago        Running             kube-controller-manager   0                   3717dfc15b103       kube-controller-manager-addons-337450
	
	
	==> coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] <==
	[INFO] 10.244.0.7:52815 - 34853 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000623985s
	[INFO] 10.244.0.7:47683 - 41282 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112816s
	[INFO] 10.244.0.7:47683 - 56959 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083333s
	[INFO] 10.244.0.7:50161 - 31101 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106992s
	[INFO] 10.244.0.7:50161 - 16767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074185s
	[INFO] 10.244.0.7:46753 - 3610 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190288s
	[INFO] 10.244.0.7:46753 - 62747 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092232s
	[INFO] 10.244.0.7:47155 - 48990 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097033s
	[INFO] 10.244.0.7:47155 - 25437 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000029976s
	[INFO] 10.244.0.7:51856 - 64564 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062481s
	[INFO] 10.244.0.7:51856 - 22838 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00002496s
	[INFO] 10.244.0.7:53123 - 2209 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064258s
	[INFO] 10.244.0.7:53123 - 19111 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057529s
	[INFO] 10.244.0.7:52319 - 9503 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000060552s
	[INFO] 10.244.0.7:52319 - 13853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068872s
	[INFO] 10.244.0.22:50788 - 1817 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000542062s
	[INFO] 10.244.0.22:44024 - 56028 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000088453s
	[INFO] 10.244.0.22:43947 - 23924 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000205108s
	[INFO] 10.244.0.22:56531 - 1441 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128694s
	[INFO] 10.244.0.22:59733 - 39783 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180424s
	[INFO] 10.244.0.22:34618 - 63315 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000439928s
	[INFO] 10.244.0.22:33992 - 49000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003531463s
	[INFO] 10.244.0.22:50067 - 6563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.004789966s
	[INFO] 10.244.0.25:38731 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00078675s
	[INFO] 10.244.0.25:56327 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001679876s
	
	
	==> describe nodes <==
	Name:               addons-337450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-337450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=addons-337450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_23_20_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-337450
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:23:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-337450
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:28:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:27:55 +0000   Sun, 21 Apr 2024 18:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:27:55 +0000   Sun, 21 Apr 2024 18:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:27:55 +0000   Sun, 21 Apr 2024 18:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:27:55 +0000   Sun, 21 Apr 2024 18:23:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    addons-337450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 56f6a6625bb5472dac6b0ad116cf083d
	  System UUID:                56f6a662-5bb5-472d-ac6b-0ad116cf083d
	  Boot ID:                    70c56614-471a-4691-904a-240bf9e45d25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-4hk7z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-czh85                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  headlamp                    headlamp-7559bf459f-h8lsl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-7db6d8ff4d-zkbzm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m31s
	  kube-system                 etcd-addons-337450                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m44s
	  kube-system                 kube-apiserver-addons-337450             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-addons-337450    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-n76l5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-addons-337450             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 metrics-server-c59844bb4-dkrx4           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m25s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-drwst          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node addons-337450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node addons-337450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x7 over 5m51s)  kubelet          Node addons-337450 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s                  kubelet          Node addons-337450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m45s                  kubelet          Node addons-337450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s                  kubelet          Node addons-337450 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m43s                  kubelet          Node addons-337450 status is now: NodeReady
	  Normal  RegisteredNode           5m32s                  node-controller  Node addons-337450 event: Registered Node addons-337450 in Controller
	
	
	==> dmesg <==
	[  +5.097446] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.890642] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.223425] kauditd_printk_skb: 92 callbacks suppressed
	[Apr21 18:24] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.028609] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.001309] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.843631] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.314068] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.008195] kauditd_printk_skb: 52 callbacks suppressed
	[Apr21 18:25] kauditd_printk_skb: 49 callbacks suppressed
	[ +28.608990] kauditd_printk_skb: 24 callbacks suppressed
	[ +26.836037] kauditd_printk_skb: 24 callbacks suppressed
	[Apr21 18:26] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.553523] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.851696] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.648706] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.226919] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.168718] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.088225] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.064923] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.075488] kauditd_printk_skb: 53 callbacks suppressed
	[Apr21 18:27] kauditd_printk_skb: 3 callbacks suppressed
	[ +11.576735] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.859091] kauditd_printk_skb: 24 callbacks suppressed
	[Apr21 18:28] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] <==
	{"level":"info","ts":"2024-04-21T18:26:20.734066Z","caller":"traceutil/trace.go:171","msg":"trace[2027573916] linearizableReadLoop","detail":"{readStateIndex:1419; appliedIndex:1418; }","duration":"348.840082ms","start":"2024-04-21T18:26:20.38521Z","end":"2024-04-21T18:26:20.73405Z","steps":["trace[2027573916] 'read index received'  (duration: 348.729763ms)","trace[2027573916] 'applied index is now lower than readState.Index'  (duration: 109.764µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T18:26:20.734288Z","caller":"traceutil/trace.go:171","msg":"trace[310206028] transaction","detail":"{read_only:false; response_revision:1365; number_of_response:1; }","duration":"394.719167ms","start":"2024-04-21T18:26:20.33956Z","end":"2024-04-21T18:26:20.734279Z","steps":["trace[310206028] 'process raft request'  (duration: 394.414437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:20.734407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:20.339543Z","time spent":"394.799936ms","remote":"127.0.0.1:48940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1672,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1628 >> failure:<>"}
	{"level":"warn","ts":"2024-04-21T18:26:20.734786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.550665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T18:26:20.734843Z","caller":"traceutil/trace.go:171","msg":"trace[1589045702] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1365; }","duration":"349.629142ms","start":"2024-04-21T18:26:20.385206Z","end":"2024-04-21T18:26:20.734835Z","steps":["trace[1589045702] 'agreement among raft nodes before linearized reading'  (duration: 349.482943ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:20.734929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:20.385172Z","time spent":"349.747063ms","remote":"127.0.0.1:49132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	{"level":"warn","ts":"2024-04-21T18:26:20.735112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.701267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8120"}
	{"level":"info","ts":"2024-04-21T18:26:20.73516Z","caller":"traceutil/trace.go:171","msg":"trace[231272520] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1365; }","duration":"183.771048ms","start":"2024-04-21T18:26:20.551383Z","end":"2024-04-21T18:26:20.735154Z","steps":["trace[231272520] 'agreement among raft nodes before linearized reading'  (duration: 183.658907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:20.738272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.175196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-04-21T18:26:20.738341Z","caller":"traceutil/trace.go:171","msg":"trace[1755853234] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1365; }","duration":"157.115224ms","start":"2024-04-21T18:26:20.581215Z","end":"2024-04-21T18:26:20.73833Z","steps":["trace[1755853234] 'agreement among raft nodes before linearized reading'  (duration: 154.148728ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:26:35.137036Z","caller":"traceutil/trace.go:171","msg":"trace[1534935221] linearizableReadLoop","detail":"{readStateIndex:1559; appliedIndex:1558; }","duration":"303.101159ms","start":"2024-04-21T18:26:34.833917Z","end":"2024-04-21T18:26:35.137018Z","steps":["trace[1534935221] 'read index received'  (duration: 302.941352ms)","trace[1534935221] 'applied index is now lower than readState.Index'  (duration: 159.218µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T18:26:35.137271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.337088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-84df5799c-hc5t9.17c85ede0e133f63\" ","response":"range_response_count:1 size:794"}
	{"level":"info","ts":"2024-04-21T18:26:35.137334Z","caller":"traceutil/trace.go:171","msg":"trace[686504192] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-84df5799c-hc5t9.17c85ede0e133f63; range_end:; response_count:1; response_revision:1500; }","duration":"303.429976ms","start":"2024-04-21T18:26:34.833892Z","end":"2024-04-21T18:26:35.137322Z","steps":["trace[686504192] 'agreement among raft nodes before linearized reading'  (duration: 303.279689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:35.137493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:34.833879Z","time spent":"303.525957ms","remote":"127.0.0.1:48830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":1,"response size":818,"request content":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-84df5799c-hc5t9.17c85ede0e133f63\" "}
	{"level":"info","ts":"2024-04-21T18:26:35.137338Z","caller":"traceutil/trace.go:171","msg":"trace[1061000865] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"326.553827ms","start":"2024-04-21T18:26:34.810777Z","end":"2024-04-21T18:26:35.137331Z","steps":["trace[1061000865] 'process raft request'  (duration: 326.124986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:35.137718Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:34.81076Z","time spent":"326.916653ms","remote":"127.0.0.1:49028","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-337450\" mod_revision:1380 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-337450\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-337450\" > >"}
	{"level":"warn","ts":"2024-04-21T18:26:35.137304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.252592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6042"}
	{"level":"info","ts":"2024-04-21T18:26:35.138212Z","caller":"traceutil/trace.go:171","msg":"trace[909335110] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1500; }","duration":"186.186523ms","start":"2024-04-21T18:26:34.952016Z","end":"2024-04-21T18:26:35.138203Z","steps":["trace[909335110] 'agreement among raft nodes before linearized reading'  (duration: 185.222257ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:28:03.009195Z","caller":"traceutil/trace.go:171","msg":"trace[647246773] linearizableReadLoop","detail":"{readStateIndex:1995; appliedIndex:1994; }","duration":"237.163431ms","start":"2024-04-21T18:28:02.771992Z","end":"2024-04-21T18:28:03.009155Z","steps":["trace[647246773] 'read index received'  (duration: 237.003836ms)","trace[647246773] 'applied index is now lower than readState.Index'  (duration: 159.036µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T18:28:03.009543Z","caller":"traceutil/trace.go:171","msg":"trace[984561989] transaction","detail":"{read_only:false; response_revision:1913; number_of_response:1; }","duration":"311.528435ms","start":"2024-04-21T18:28:02.698002Z","end":"2024-04-21T18:28:03.00953Z","steps":["trace[984561989] 'process raft request'  (duration: 311.033507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:28:03.009687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.623749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-04-21T18:28:03.009714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:28:02.697985Z","time spent":"311.626258ms","remote":"127.0.0.1:48936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1912 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-21T18:28:03.00974Z","caller":"traceutil/trace.go:171","msg":"trace[140255945] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1913; }","duration":"237.790266ms","start":"2024-04-21T18:28:02.771941Z","end":"2024-04-21T18:28:03.009731Z","steps":["trace[140255945] 'agreement among raft nodes before linearized reading'  (duration: 237.622002ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:28:03.009924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.405556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-21T18:28:03.009973Z","caller":"traceutil/trace.go:171","msg":"trace[282287777] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1913; }","duration":"186.484556ms","start":"2024-04-21T18:28:02.823482Z","end":"2024-04-21T18:28:03.009966Z","steps":["trace[282287777] 'agreement among raft nodes before linearized reading'  (duration: 186.422985ms)"],"step_count":1}
	
	
	==> gcp-auth [e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855] <==
	2024/04/21 18:26:14 Ready to write response ...
	2024/04/21 18:26:15 Ready to marshal response ...
	2024/04/21 18:26:15 Ready to write response ...
	2024/04/21 18:26:20 Ready to marshal response ...
	2024/04/21 18:26:20 Ready to write response ...
	2024/04/21 18:26:28 Ready to marshal response ...
	2024/04/21 18:26:28 Ready to write response ...
	2024/04/21 18:26:32 Ready to marshal response ...
	2024/04/21 18:26:32 Ready to write response ...
	2024/04/21 18:26:43 Ready to marshal response ...
	2024/04/21 18:26:43 Ready to write response ...
	2024/04/21 18:26:43 Ready to marshal response ...
	2024/04/21 18:26:43 Ready to write response ...
	2024/04/21 18:26:49 Ready to marshal response ...
	2024/04/21 18:26:49 Ready to write response ...
	2024/04/21 18:26:56 Ready to marshal response ...
	2024/04/21 18:26:56 Ready to write response ...
	2024/04/21 18:27:24 Ready to marshal response ...
	2024/04/21 18:27:24 Ready to write response ...
	2024/04/21 18:27:24 Ready to marshal response ...
	2024/04/21 18:27:24 Ready to write response ...
	2024/04/21 18:27:24 Ready to marshal response ...
	2024/04/21 18:27:24 Ready to write response ...
	2024/04/21 18:28:53 Ready to marshal response ...
	2024/04/21 18:28:53 Ready to write response ...
	
	
	==> kernel <==
	 18:29:04 up 6 min,  0 users,  load average: 0.68, 1.32, 0.72
	Linux addons-337450 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] <==
	I0421 18:25:30.188862       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0421 18:26:28.662067       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0421 18:26:32.759690       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0421 18:26:32.944739       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.64.189"}
	I0421 18:26:38.223932       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0421 18:26:39.253580       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0421 18:26:59.680395       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0421 18:27:06.398994       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.399041       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.422514       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.422628       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.444547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.444613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.452507       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.452565       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.480191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.483544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0421 18:27:07.452774       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0421 18:27:07.484367       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0421 18:27:07.490818       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0421 18:27:12.247113       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0421 18:27:24.718994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.171.223"}
	I0421 18:28:53.781380       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.230.173"}
	E0421 18:28:56.026341       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0421 18:28:58.788693       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] <==
	W0421 18:27:50.652082       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:27:50.652183       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:27:51.616591       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:27:51.616644       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:28:00.662566       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:28:00.662692       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:28:13.993543       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:28:13.993658       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:28:30.409227       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:28:30.409393       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:28:37.650539       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:28:37.650641       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0421 18:28:53.648925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="46.147232ms"
	I0421 18:28:53.669248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="19.663791ms"
	I0421 18:28:53.669345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="46.027µs"
	I0421 18:28:53.675506       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.521µs"
	W0421 18:28:54.726612       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:28:54.726641       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0421 18:28:55.770297       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0421 18:28:55.776530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="9.081µs"
	I0421 18:28:55.784231       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0421 18:28:56.398880       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:28:56.399001       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0421 18:28:57.946848       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="18.262725ms"
	I0421 18:28:57.947012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="53.37µs"
	
	
	==> kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] <==
	I0421 18:23:36.981912       1 server_linux.go:69] "Using iptables proxy"
	I0421 18:23:37.069313       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.51"]
	I0421 18:23:37.195642       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:23:37.195739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:23:37.195757       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:23:37.199921       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:23:37.200100       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:23:37.200137       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:23:37.201402       1 config.go:192] "Starting service config controller"
	I0421 18:23:37.201519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:23:37.201539       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:23:37.201543       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:23:37.202146       1 config.go:319] "Starting node config controller"
	I0421 18:23:37.202153       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 18:23:37.302545       1 shared_informer.go:320] Caches are synced for node config
	I0421 18:23:37.302596       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:23:37.302624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] <==
	W0421 18:23:17.036246       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 18:23:17.036285       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:23:17.869765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 18:23:17.869826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 18:23:17.902635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:17.902758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:17.952783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 18:23:17.952859       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 18:23:17.957021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:17.957081       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:17.962255       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 18:23:17.962305       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:23:17.979957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 18:23:17.980006       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 18:23:18.040221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 18:23:18.040281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 18:23:18.074679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:18.074740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:18.223099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:18.223136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:18.254665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 18:23:18.254721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 18:23:18.293021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 18:23:18.293158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0421 18:23:19.726980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 18:28:19 addons-337450 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:28:53 addons-337450 kubelet[1271]: I0421 18:28:53.656180    1271 topology_manager.go:215] "Topology Admit Handler" podUID="68d248d5-3d1e-4c96-89c8-b2099198c47b" podNamespace="default" podName="hello-world-app-86c47465fc-4hk7z"
	Apr 21 18:28:53 addons-337450 kubelet[1271]: E0421 18:28:53.656846    1271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46e8267d-6c69-4a40-9171-46eacf5eb061" containerName="local-path-provisioner"
	Apr 21 18:28:53 addons-337450 kubelet[1271]: I0421 18:28:53.656953    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e8267d-6c69-4a40-9171-46eacf5eb061" containerName="local-path-provisioner"
	Apr 21 18:28:53 addons-337450 kubelet[1271]: I0421 18:28:53.688791    1271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/68d248d5-3d1e-4c96-89c8-b2099198c47b-gcp-creds\") pod \"hello-world-app-86c47465fc-4hk7z\" (UID: \"68d248d5-3d1e-4c96-89c8-b2099198c47b\") " pod="default/hello-world-app-86c47465fc-4hk7z"
	Apr 21 18:28:53 addons-337450 kubelet[1271]: I0421 18:28:53.688942    1271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkbqb\" (UniqueName: \"kubernetes.io/projected/68d248d5-3d1e-4c96-89c8-b2099198c47b-kube-api-access-nkbqb\") pod \"hello-world-app-86c47465fc-4hk7z\" (UID: \"68d248d5-3d1e-4c96-89c8-b2099198c47b\") " pod="default/hello-world-app-86c47465fc-4hk7z"
	Apr 21 18:28:54 addons-337450 kubelet[1271]: I0421 18:28:54.798904    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmxv6\" (UniqueName: \"kubernetes.io/projected/ebf19058-ca7a-4a46-8ce6-71aaac949202-kube-api-access-pmxv6\") pod \"ebf19058-ca7a-4a46-8ce6-71aaac949202\" (UID: \"ebf19058-ca7a-4a46-8ce6-71aaac949202\") "
	Apr 21 18:28:54 addons-337450 kubelet[1271]: I0421 18:28:54.801600    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf19058-ca7a-4a46-8ce6-71aaac949202-kube-api-access-pmxv6" (OuterVolumeSpecName: "kube-api-access-pmxv6") pod "ebf19058-ca7a-4a46-8ce6-71aaac949202" (UID: "ebf19058-ca7a-4a46-8ce6-71aaac949202"). InnerVolumeSpecName "kube-api-access-pmxv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 21 18:28:54 addons-337450 kubelet[1271]: I0421 18:28:54.867660    1271 scope.go:117] "RemoveContainer" containerID="c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260"
	Apr 21 18:28:54 addons-337450 kubelet[1271]: I0421 18:28:54.899393    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pmxv6\" (UniqueName: \"kubernetes.io/projected/ebf19058-ca7a-4a46-8ce6-71aaac949202-kube-api-access-pmxv6\") on node \"addons-337450\" DevicePath \"\""
	Apr 21 18:28:54 addons-337450 kubelet[1271]: I0421 18:28:54.900013    1271 scope.go:117] "RemoveContainer" containerID="c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260"
	Apr 21 18:28:54 addons-337450 kubelet[1271]: E0421 18:28:54.906883    1271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260\": container with ID starting with c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260 not found: ID does not exist" containerID="c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260"
	Apr 21 18:28:54 addons-337450 kubelet[1271]: I0421 18:28:54.906954    1271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260"} err="failed to get container status \"c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260\": rpc error: code = NotFound desc = could not find container \"c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260\": container with ID starting with c275009775e60f25ebfbfb70f4417a7fbafb3dcb29242a11bd2192f7e2719260 not found: ID does not exist"
	Apr 21 18:28:55 addons-337450 kubelet[1271]: I0421 18:28:55.962043    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802473e7-7d92-455d-8504-8b944f605d82" path="/var/lib/kubelet/pods/802473e7-7d92-455d-8504-8b944f605d82/volumes"
	Apr 21 18:28:55 addons-337450 kubelet[1271]: I0421 18:28:55.963949    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db264070-f639-45dc-b205-3d286eb77287" path="/var/lib/kubelet/pods/db264070-f639-45dc-b205-3d286eb77287/volumes"
	Apr 21 18:28:55 addons-337450 kubelet[1271]: I0421 18:28:55.966403    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf19058-ca7a-4a46-8ce6-71aaac949202" path="/var/lib/kubelet/pods/ebf19058-ca7a-4a46-8ce6-71aaac949202/volumes"
	Apr 21 18:28:57 addons-337450 kubelet[1271]: I0421 18:28:57.926802    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-4hk7z" podStartSLOduration=1.7355409179999999 podStartE2EDuration="4.926764971s" podCreationTimestamp="2024-04-21 18:28:53 +0000 UTC" firstStartedPulling="2024-04-21 18:28:54.256922786 +0000 UTC m=+334.493752867" lastFinishedPulling="2024-04-21 18:28:57.448146839 +0000 UTC m=+337.684976920" observedRunningTime="2024-04-21 18:28:57.926578169 +0000 UTC m=+338.163408269" watchObservedRunningTime="2024-04-21 18:28:57.926764971 +0000 UTC m=+338.163595071"
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.142570    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/598d7672-fac0-49eb-9531-28ed2743003c-webhook-cert\") pod \"598d7672-fac0-49eb-9531-28ed2743003c\" (UID: \"598d7672-fac0-49eb-9531-28ed2743003c\") "
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.142632    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmfr6\" (UniqueName: \"kubernetes.io/projected/598d7672-fac0-49eb-9531-28ed2743003c-kube-api-access-qmfr6\") pod \"598d7672-fac0-49eb-9531-28ed2743003c\" (UID: \"598d7672-fac0-49eb-9531-28ed2743003c\") "
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.145923    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598d7672-fac0-49eb-9531-28ed2743003c-kube-api-access-qmfr6" (OuterVolumeSpecName: "kube-api-access-qmfr6") pod "598d7672-fac0-49eb-9531-28ed2743003c" (UID: "598d7672-fac0-49eb-9531-28ed2743003c"). InnerVolumeSpecName "kube-api-access-qmfr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.147706    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598d7672-fac0-49eb-9531-28ed2743003c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "598d7672-fac0-49eb-9531-28ed2743003c" (UID: "598d7672-fac0-49eb-9531-28ed2743003c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.243359    1271 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/598d7672-fac0-49eb-9531-28ed2743003c-webhook-cert\") on node \"addons-337450\" DevicePath \"\""
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.243403    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qmfr6\" (UniqueName: \"kubernetes.io/projected/598d7672-fac0-49eb-9531-28ed2743003c-kube-api-access-qmfr6\") on node \"addons-337450\" DevicePath \"\""
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.936841    1271 scope.go:117] "RemoveContainer" containerID="89c9751f6703b301cd252b4dd477322bd95653d3badaa3c2df4fb1626dd13db4"
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.944198    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598d7672-fac0-49eb-9531-28ed2743003c" path="/var/lib/kubelet/pods/598d7672-fac0-49eb-9531-28ed2743003c/volumes"
	
	
	==> storage-provisioner [e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e] <==
	I0421 18:23:41.760566       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 18:23:41.784000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 18:23:41.784101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 18:23:41.797856       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 18:23:41.803105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-337450_2c81037f-fda8-484b-be10-f2799d1cde06!
	I0421 18:23:41.804885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dfcb6b5-e135-4ff0-a13c-99e06c620c2e", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-337450_2c81037f-fda8-484b-be10-f2799d1cde06 became leader
	I0421 18:23:41.903542       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-337450_2c81037f-fda8-484b-be10-f2799d1cde06!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-337450 -n addons-337450
helpers_test.go:261: (dbg) Run:  kubectl --context addons-337450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.52s)
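For reference, the step this test fails on is the Host-header request issued via `ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` (visible in the Audit table further below). The following is a minimal, hypothetical Go sketch of that same request, not the actual addons_test.go code; the target URL assumes it is run from inside the addons-337450 VM where ingress-nginx listens on port 80.

// sketch_ingress_check.go — hypothetical reproduction of the failing ingress probe.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const targetURL = "http://127.0.0.1/" // assumption: run where ingress-nginx serves :80

	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodGet, targetURL, nil)
	if err != nil {
		panic(err)
	}
	// The Host header is what selects the nginx.example.com Ingress rule.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // the test's curl hit this path and timed out
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}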

                                                
                                    
TestAddons/parallel/MetricsServer (354.24s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 21.137669ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-dkrx4" [6b506806-a7ad-4fa2-95ec-c1698f2f93e4] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00598074s
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (88.256869ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 2m42.322270944s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (73.406565ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 2m44.545369005s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (70.277365ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 2m51.231417307s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (74.362092ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 2m57.317243595s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (65.230457ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 3m2.761086794s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (113.222824ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 3m22.837138129s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (62.437113ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 3m40.809259672s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (74.228737ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 4m10.756085679s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (61.976143ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 4m47.82149983s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (63.495ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 5m52.890924689s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (61.115327ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 6m46.996605927s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (67.601732ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 7m39.19260575s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-337450 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-337450 top pods -n kube-system: exit status 1 (70.735081ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zkbzm, age: 8m27.328963771s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
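The repeated `kubectl top pods -n kube-system` attempts above span roughly six minutes before the test gives up. Below is a minimal, hypothetical Go sketch of that retry pattern; it is not the helper addons_test.go actually uses, and the 15-second back-off and six-minute deadline are assumptions chosen to roughly match the spacing in the log.

// sketch_metrics_poll.go — hypothetical polling loop for metrics-server availability.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for attempt := 1; ; attempt++ {
		// Same command the report shows being retried against the test profile.
		out, err := exec.Command("kubectl", "--context", "addons-337450",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available after %d attempt(s):\n%s", attempt, out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("giving up after %d attempt(s): %v\n%s", attempt, err, out)
			return
		}
		time.Sleep(15 * time.Second) // back off before the next attempt
	}
}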
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-337450 -n addons-337450
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-337450 logs -n 25: (1.60889412s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-287232                                                                     | download-only-287232 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-916770                                                                     | download-only-916770 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-287232                                                                     | download-only-287232 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-997979 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | binary-mirror-997979                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34105                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-997979                                                                     | binary-mirror-997979 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-337450 --wait=true                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:26 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ip      | addons-337450 ip                                                                            | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-337450 ssh curl -s                                                                   | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-337450 ssh cat                                                                       | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:26 UTC |
	|         | /opt/local-path-provisioner/pvc-17b0f281-1dfd-4035-a69d-f977b9bf0dd8_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-337450 addons                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:26 UTC | 21 Apr 24 18:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-337450 addons                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | -p addons-337450                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | addons-337450                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:27 UTC | 21 Apr 24 18:27 UTC |
	|         | -p addons-337450                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-337450 ip                                                                            | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:28 UTC | 21 Apr 24 18:28 UTC |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:28 UTC | 21 Apr 24 18:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-337450 addons disable                                                                | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:28 UTC | 21 Apr 24 18:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-337450 addons                                                                        | addons-337450        | jenkins | v1.33.0 | 21 Apr 24 18:32 UTC | 21 Apr 24 18:32 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:22:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:22:35.227011   12353 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:22:35.227110   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:22:35.227117   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:22:35.227136   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:22:35.227321   12353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:22:35.227937   12353 out.go:298] Setting JSON to false
	I0421 18:22:35.228773   12353 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":253,"bootTime":1713723502,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:22:35.228837   12353 start.go:139] virtualization: kvm guest
	I0421 18:22:35.231091   12353 out.go:177] * [addons-337450] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:22:35.232448   12353 notify.go:220] Checking for updates...
	I0421 18:22:35.232456   12353 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:22:35.233745   12353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:22:35.235090   12353 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:22:35.236482   12353 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:35.238011   12353 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:22:35.239735   12353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:22:35.241366   12353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:22:35.274999   12353 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 18:22:35.276383   12353 start.go:297] selected driver: kvm2
	I0421 18:22:35.276402   12353 start.go:901] validating driver "kvm2" against <nil>
	I0421 18:22:35.276418   12353 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:22:35.277169   12353 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:22:35.277271   12353 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:22:35.292389   12353 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:22:35.292454   12353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:22:35.292671   12353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:22:35.292747   12353 cni.go:84] Creating CNI manager for ""
	I0421 18:22:35.292763   12353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:22:35.292774   12353 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 18:22:35.292845   12353 start.go:340] cluster config:
	{Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:22:35.292950   12353 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:22:35.294718   12353 out.go:177] * Starting "addons-337450" primary control-plane node in "addons-337450" cluster
	I0421 18:22:35.295942   12353 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:22:35.295985   12353 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:22:35.295999   12353 cache.go:56] Caching tarball of preloaded images
	I0421 18:22:35.296086   12353 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:22:35.296097   12353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:22:35.296420   12353 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/config.json ...
	I0421 18:22:35.296451   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/config.json: {Name:mke0896c50ea6ceabbcecb759314a92bd3d3edbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:22:35.296607   12353 start.go:360] acquireMachinesLock for addons-337450: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:22:35.296669   12353 start.go:364] duration metric: took 45.954µs to acquireMachinesLock for "addons-337450"
	I0421 18:22:35.296692   12353 start.go:93] Provisioning new machine with config: &{Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:22:35.296763   12353 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 18:22:35.298476   12353 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0421 18:22:35.298633   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:22:35.298679   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:22:35.312953   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0421 18:22:35.313381   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:22:35.313930   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:22:35.313954   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:22:35.314294   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:22:35.314500   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:22:35.314636   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:22:35.314799   12353 start.go:159] libmachine.API.Create for "addons-337450" (driver="kvm2")
	I0421 18:22:35.314830   12353 client.go:168] LocalClient.Create starting
	I0421 18:22:35.314867   12353 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:22:35.352695   12353 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:22:35.587455   12353 main.go:141] libmachine: Running pre-create checks...
	I0421 18:22:35.587482   12353 main.go:141] libmachine: (addons-337450) Calling .PreCreateCheck
	I0421 18:22:35.588015   12353 main.go:141] libmachine: (addons-337450) Calling .GetConfigRaw
	I0421 18:22:35.588420   12353 main.go:141] libmachine: Creating machine...
	I0421 18:22:35.588434   12353 main.go:141] libmachine: (addons-337450) Calling .Create
	I0421 18:22:35.588590   12353 main.go:141] libmachine: (addons-337450) Creating KVM machine...
	I0421 18:22:35.589817   12353 main.go:141] libmachine: (addons-337450) DBG | found existing default KVM network
	I0421 18:22:35.590577   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.590431   12375 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0421 18:22:35.590598   12353 main.go:141] libmachine: (addons-337450) DBG | created network xml: 
	I0421 18:22:35.590611   12353 main.go:141] libmachine: (addons-337450) DBG | <network>
	I0421 18:22:35.590616   12353 main.go:141] libmachine: (addons-337450) DBG |   <name>mk-addons-337450</name>
	I0421 18:22:35.590624   12353 main.go:141] libmachine: (addons-337450) DBG |   <dns enable='no'/>
	I0421 18:22:35.590631   12353 main.go:141] libmachine: (addons-337450) DBG |   
	I0421 18:22:35.590641   12353 main.go:141] libmachine: (addons-337450) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0421 18:22:35.590652   12353 main.go:141] libmachine: (addons-337450) DBG |     <dhcp>
	I0421 18:22:35.590661   12353 main.go:141] libmachine: (addons-337450) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0421 18:22:35.590675   12353 main.go:141] libmachine: (addons-337450) DBG |     </dhcp>
	I0421 18:22:35.590705   12353 main.go:141] libmachine: (addons-337450) DBG |   </ip>
	I0421 18:22:35.590751   12353 main.go:141] libmachine: (addons-337450) DBG |   
	I0421 18:22:35.590768   12353 main.go:141] libmachine: (addons-337450) DBG | </network>
	I0421 18:22:35.590779   12353 main.go:141] libmachine: (addons-337450) DBG | 
	I0421 18:22:35.595996   12353 main.go:141] libmachine: (addons-337450) DBG | trying to create private KVM network mk-addons-337450 192.168.39.0/24...
	I0421 18:22:35.660360   12353 main.go:141] libmachine: (addons-337450) DBG | private KVM network mk-addons-337450 192.168.39.0/24 created
	I0421 18:22:35.660404   12353 main.go:141] libmachine: (addons-337450) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450 ...
	I0421 18:22:35.660432   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.660322   12375 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:35.660452   12353 main.go:141] libmachine: (addons-337450) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:22:35.660473   12353 main.go:141] libmachine: (addons-337450) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:22:35.908314   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.908177   12375 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa...
	I0421 18:22:35.969413   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.969294   12375 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/addons-337450.rawdisk...
	I0421 18:22:35.969451   12353 main.go:141] libmachine: (addons-337450) DBG | Writing magic tar header
	I0421 18:22:35.969466   12353 main.go:141] libmachine: (addons-337450) DBG | Writing SSH key tar header
	I0421 18:22:35.969475   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:35.969417   12375 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450 ...
	I0421 18:22:35.969532   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450
	I0421 18:22:35.969558   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450 (perms=drwx------)
	I0421 18:22:35.969573   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:22:35.969585   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:22:35.969600   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:22:35.969609   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:22:35.969618   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:22:35.969625   12353 main.go:141] libmachine: (addons-337450) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:22:35.969639   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:35.969652   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:22:35.969661   12353 main.go:141] libmachine: (addons-337450) Creating domain...
	I0421 18:22:35.969670   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:22:35.969683   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:22:35.969694   12353 main.go:141] libmachine: (addons-337450) DBG | Checking permissions on dir: /home
	I0421 18:22:35.969702   12353 main.go:141] libmachine: (addons-337450) DBG | Skipping /home - not owner
	I0421 18:22:35.970755   12353 main.go:141] libmachine: (addons-337450) define libvirt domain using xml: 
	I0421 18:22:35.970780   12353 main.go:141] libmachine: (addons-337450) <domain type='kvm'>
	I0421 18:22:35.970808   12353 main.go:141] libmachine: (addons-337450)   <name>addons-337450</name>
	I0421 18:22:35.970819   12353 main.go:141] libmachine: (addons-337450)   <memory unit='MiB'>4000</memory>
	I0421 18:22:35.970828   12353 main.go:141] libmachine: (addons-337450)   <vcpu>2</vcpu>
	I0421 18:22:35.970843   12353 main.go:141] libmachine: (addons-337450)   <features>
	I0421 18:22:35.970852   12353 main.go:141] libmachine: (addons-337450)     <acpi/>
	I0421 18:22:35.970859   12353 main.go:141] libmachine: (addons-337450)     <apic/>
	I0421 18:22:35.970867   12353 main.go:141] libmachine: (addons-337450)     <pae/>
	I0421 18:22:35.970880   12353 main.go:141] libmachine: (addons-337450)     
	I0421 18:22:35.970892   12353 main.go:141] libmachine: (addons-337450)   </features>
	I0421 18:22:35.970902   12353 main.go:141] libmachine: (addons-337450)   <cpu mode='host-passthrough'>
	I0421 18:22:35.970913   12353 main.go:141] libmachine: (addons-337450)   
	I0421 18:22:35.970924   12353 main.go:141] libmachine: (addons-337450)   </cpu>
	I0421 18:22:35.970937   12353 main.go:141] libmachine: (addons-337450)   <os>
	I0421 18:22:35.970946   12353 main.go:141] libmachine: (addons-337450)     <type>hvm</type>
	I0421 18:22:35.970976   12353 main.go:141] libmachine: (addons-337450)     <boot dev='cdrom'/>
	I0421 18:22:35.970993   12353 main.go:141] libmachine: (addons-337450)     <boot dev='hd'/>
	I0421 18:22:35.971003   12353 main.go:141] libmachine: (addons-337450)     <bootmenu enable='no'/>
	I0421 18:22:35.971018   12353 main.go:141] libmachine: (addons-337450)   </os>
	I0421 18:22:35.971033   12353 main.go:141] libmachine: (addons-337450)   <devices>
	I0421 18:22:35.971045   12353 main.go:141] libmachine: (addons-337450)     <disk type='file' device='cdrom'>
	I0421 18:22:35.971061   12353 main.go:141] libmachine: (addons-337450)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/boot2docker.iso'/>
	I0421 18:22:35.971072   12353 main.go:141] libmachine: (addons-337450)       <target dev='hdc' bus='scsi'/>
	I0421 18:22:35.971096   12353 main.go:141] libmachine: (addons-337450)       <readonly/>
	I0421 18:22:35.971114   12353 main.go:141] libmachine: (addons-337450)     </disk>
	I0421 18:22:35.971123   12353 main.go:141] libmachine: (addons-337450)     <disk type='file' device='disk'>
	I0421 18:22:35.971134   12353 main.go:141] libmachine: (addons-337450)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:22:35.971155   12353 main.go:141] libmachine: (addons-337450)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/addons-337450.rawdisk'/>
	I0421 18:22:35.971163   12353 main.go:141] libmachine: (addons-337450)       <target dev='hda' bus='virtio'/>
	I0421 18:22:35.971169   12353 main.go:141] libmachine: (addons-337450)     </disk>
	I0421 18:22:35.971176   12353 main.go:141] libmachine: (addons-337450)     <interface type='network'>
	I0421 18:22:35.971182   12353 main.go:141] libmachine: (addons-337450)       <source network='mk-addons-337450'/>
	I0421 18:22:35.971190   12353 main.go:141] libmachine: (addons-337450)       <model type='virtio'/>
	I0421 18:22:35.971195   12353 main.go:141] libmachine: (addons-337450)     </interface>
	I0421 18:22:35.971203   12353 main.go:141] libmachine: (addons-337450)     <interface type='network'>
	I0421 18:22:35.971209   12353 main.go:141] libmachine: (addons-337450)       <source network='default'/>
	I0421 18:22:35.971213   12353 main.go:141] libmachine: (addons-337450)       <model type='virtio'/>
	I0421 18:22:35.971226   12353 main.go:141] libmachine: (addons-337450)     </interface>
	I0421 18:22:35.971239   12353 main.go:141] libmachine: (addons-337450)     <serial type='pty'>
	I0421 18:22:35.971252   12353 main.go:141] libmachine: (addons-337450)       <target port='0'/>
	I0421 18:22:35.971263   12353 main.go:141] libmachine: (addons-337450)     </serial>
	I0421 18:22:35.971276   12353 main.go:141] libmachine: (addons-337450)     <console type='pty'>
	I0421 18:22:35.971295   12353 main.go:141] libmachine: (addons-337450)       <target type='serial' port='0'/>
	I0421 18:22:35.971306   12353 main.go:141] libmachine: (addons-337450)     </console>
	I0421 18:22:35.971314   12353 main.go:141] libmachine: (addons-337450)     <rng model='virtio'>
	I0421 18:22:35.971325   12353 main.go:141] libmachine: (addons-337450)       <backend model='random'>/dev/random</backend>
	I0421 18:22:35.971336   12353 main.go:141] libmachine: (addons-337450)     </rng>
	I0421 18:22:35.971349   12353 main.go:141] libmachine: (addons-337450)     
	I0421 18:22:35.971367   12353 main.go:141] libmachine: (addons-337450)     
	I0421 18:22:35.971450   12353 main.go:141] libmachine: (addons-337450)   </devices>
	I0421 18:22:35.971468   12353 main.go:141] libmachine: (addons-337450) </domain>
	I0421 18:22:35.971482   12353 main.go:141] libmachine: (addons-337450) 
	I0421 18:22:35.977957   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:4c:66:bb in network default
	I0421 18:22:35.978470   12353 main.go:141] libmachine: (addons-337450) Ensuring networks are active...
	I0421 18:22:35.978495   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:35.979065   12353 main.go:141] libmachine: (addons-337450) Ensuring network default is active
	I0421 18:22:35.979340   12353 main.go:141] libmachine: (addons-337450) Ensuring network mk-addons-337450 is active
	I0421 18:22:35.979887   12353 main.go:141] libmachine: (addons-337450) Getting domain xml...
	I0421 18:22:35.980491   12353 main.go:141] libmachine: (addons-337450) Creating domain...
	I0421 18:22:37.332521   12353 main.go:141] libmachine: (addons-337450) Waiting to get IP...
	I0421 18:22:37.333211   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:37.333642   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:37.333676   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:37.333622   12375 retry.go:31] will retry after 290.403397ms: waiting for machine to come up
	I0421 18:22:37.625299   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:37.625693   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:37.625744   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:37.625686   12375 retry.go:31] will retry after 302.232672ms: waiting for machine to come up
	I0421 18:22:37.929187   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:37.929647   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:37.929672   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:37.929592   12375 retry.go:31] will retry after 463.355197ms: waiting for machine to come up
	I0421 18:22:38.394034   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:38.394435   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:38.394460   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:38.394403   12375 retry.go:31] will retry after 526.97784ms: waiting for machine to come up
	I0421 18:22:38.922949   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:38.923405   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:38.923458   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:38.923367   12375 retry.go:31] will retry after 603.499708ms: waiting for machine to come up
	I0421 18:22:39.528321   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:39.528749   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:39.528781   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:39.528690   12375 retry.go:31] will retry after 632.935544ms: waiting for machine to come up
	I0421 18:22:40.163453   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:40.163890   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:40.163918   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:40.163837   12375 retry.go:31] will retry after 901.774974ms: waiting for machine to come up
	I0421 18:22:41.067580   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:41.067967   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:41.067997   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:41.067909   12375 retry.go:31] will retry after 1.413543626s: waiting for machine to come up
	I0421 18:22:42.483305   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:42.483709   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:42.483731   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:42.483675   12375 retry.go:31] will retry after 1.750079619s: waiting for machine to come up
	I0421 18:22:44.236604   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:44.237041   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:44.237064   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:44.236973   12375 retry.go:31] will retry after 1.402403396s: waiting for machine to come up
	I0421 18:22:45.641454   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:45.641830   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:45.641862   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:45.641765   12375 retry.go:31] will retry after 2.357370138s: waiting for machine to come up
	I0421 18:22:48.002442   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:48.002965   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:48.002986   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:48.002922   12375 retry.go:31] will retry after 3.525566649s: waiting for machine to come up
	I0421 18:22:51.530143   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:51.530573   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:51.530629   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:51.530587   12375 retry.go:31] will retry after 4.023576525s: waiting for machine to come up
	I0421 18:22:55.555680   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:22:55.556097   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find current IP address of domain addons-337450 in network mk-addons-337450
	I0421 18:22:55.556141   12353 main.go:141] libmachine: (addons-337450) DBG | I0421 18:22:55.556090   12375 retry.go:31] will retry after 5.658995234s: waiting for machine to come up
	I0421 18:23:01.216683   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.217149   12353 main.go:141] libmachine: (addons-337450) Found IP for machine: 192.168.39.51
	I0421 18:23:01.217170   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has current primary IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
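Annotation: the retry loop above polls for the guest's DHCP lease with a growing interval (from ~290ms up to ~5.7s) until MAC 52:54:00:b4:47:66 shows up with 192.168.39.51. A hypothetical stand-alone version of that wait, shelling out to `virsh net-dhcp-leases` (the command and the line matching are assumptions, not the driver's implementation), could look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls the libvirt network's DHCP leases until a line mentioning
// the given MAC appears, sleeping a little longer after each failed attempt.
func waitForLease(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					return line, nil // lease line includes the assigned IP
				}
			}
		}
		time.Sleep(wait)
		if wait < 5*time.Second {
			wait += wait / 2 // back off, roughly like the retry intervals in the log
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, network, timeout)
}

func main() {
	lease, err := waitForLease("mk-addons-337450", "52:54:00:b4:47:66", 2*time.Minute)
	fmt.Println(lease, err)
}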
	I0421 18:23:01.217176   12353 main.go:141] libmachine: (addons-337450) Reserving static IP address...
	I0421 18:23:01.217556   12353 main.go:141] libmachine: (addons-337450) DBG | unable to find host DHCP lease matching {name: "addons-337450", mac: "52:54:00:b4:47:66", ip: "192.168.39.51"} in network mk-addons-337450
	I0421 18:23:01.288523   12353 main.go:141] libmachine: (addons-337450) DBG | Getting to WaitForSSH function...
	I0421 18:23:01.288554   12353 main.go:141] libmachine: (addons-337450) Reserved static IP address: 192.168.39.51
	I0421 18:23:01.288567   12353 main.go:141] libmachine: (addons-337450) Waiting for SSH to be available...
	I0421 18:23:01.291326   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.291644   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.291677   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.291970   12353 main.go:141] libmachine: (addons-337450) DBG | Using SSH client type: external
	I0421 18:23:01.292001   12353 main.go:141] libmachine: (addons-337450) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa (-rw-------)
	I0421 18:23:01.292035   12353 main.go:141] libmachine: (addons-337450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:23:01.292051   12353 main.go:141] libmachine: (addons-337450) DBG | About to run SSH command:
	I0421 18:23:01.292064   12353 main.go:141] libmachine: (addons-337450) DBG | exit 0
	I0421 18:23:01.422675   12353 main.go:141] libmachine: (addons-337450) DBG | SSH cmd err, output: <nil>: 
	I0421 18:23:01.422979   12353 main.go:141] libmachine: (addons-337450) KVM machine creation complete!
	I0421 18:23:01.423357   12353 main.go:141] libmachine: (addons-337450) Calling .GetConfigRaw
	I0421 18:23:01.423911   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:01.424103   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:01.424277   12353 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:23:01.424291   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:01.425451   12353 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:23:01.425467   12353 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:23:01.425476   12353 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:23:01.425486   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.427889   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.428246   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.428278   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.428413   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.428595   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.428767   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.428920   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.429064   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.429310   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.429328   12353 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:23:01.529757   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
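Annotation: both SSH probes above simply run `exit 0`: first via the external ssh binary with the machine key, then via the native Go client. A minimal equivalent using golang.org/x/crypto/ssh (illustration only; user, address and key path are taken from the log, everything else is assumed) would be:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.51:22", cfg)
	if err != nil {
		log.Fatal(err) // machine not reachable yet
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}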
	I0421 18:23:01.529783   12353 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:23:01.529796   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.532350   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.532655   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.532685   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.532799   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.532961   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.533104   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.533220   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.533396   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.533551   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.533562   12353 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:23:01.635682   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:23:01.635755   12353 main.go:141] libmachine: found compatible host: buildroot
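Annotation: provisioner detection reads /etc/os-release over SSH and matches on its key=value fields (here ID=buildroot). A tiny sketch of that parsing step (hypothetical helper, not libmachine's function) is:

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value lines of /etc/os-release into a map,
// stripping surrounding quotes from values such as PRETTY_NAME.
func parseOSRelease(contents string) map[string]string {
	fields := map[string]string{}
	for _, line := range strings.Split(contents, "\n") {
		key, value, ok := strings.Cut(strings.TrimSpace(line), "=")
		if !ok || key == "" {
			continue
		}
		fields[key] = strings.Trim(value, `"`)
	}
	return fields
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fields := parseOSRelease(sample)
	fmt.Println(fields["ID"], fields["VERSION_ID"]) // buildroot 2023.02.9
}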
	I0421 18:23:01.635764   12353 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:23:01.635780   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:23:01.636039   12353 buildroot.go:166] provisioning hostname "addons-337450"
	I0421 18:23:01.636063   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:23:01.636237   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.638757   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.639142   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.639166   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.639263   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.639428   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.639577   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.639685   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.639832   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.640036   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.640050   12353 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-337450 && echo "addons-337450" | sudo tee /etc/hostname
	I0421 18:23:01.756328   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-337450
	
	I0421 18:23:01.756361   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.759061   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.759386   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.759418   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.759556   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:01.759739   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.759896   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:01.760042   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:01.760168   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:01.760325   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:01.760340   12353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-337450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-337450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-337450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:23:01.873319   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:23:01.873351   12353 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:23:01.873393   12353 buildroot.go:174] setting up certificates
	I0421 18:23:01.873403   12353 provision.go:84] configureAuth start
	I0421 18:23:01.873415   12353 main.go:141] libmachine: (addons-337450) Calling .GetMachineName
	I0421 18:23:01.873702   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:01.876424   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.876764   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.876784   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.876953   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:01.878885   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.879186   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:01.879219   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:01.879329   12353 provision.go:143] copyHostCerts
	I0421 18:23:01.879407   12353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:23:01.879549   12353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:23:01.879641   12353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:23:01.879743   12353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.addons-337450 san=[127.0.0.1 192.168.39.51 addons-337450 localhost minikube]
	I0421 18:23:02.000631   12353 provision.go:177] copyRemoteCerts
	I0421 18:23:02.000699   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:23:02.000734   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.003339   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.003610   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.003638   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.003778   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.003981   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.004160   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.004298   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.085503   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:23:02.113780   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:23:02.141526   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:23:02.167514   12353 provision.go:87] duration metric: took 294.100021ms to configureAuth
	I0421 18:23:02.167539   12353 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:23:02.167747   12353 config.go:182] Loaded profile config "addons-337450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:23:02.167835   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.170334   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.170821   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.170857   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.171029   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.171236   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.171433   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.171649   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.171852   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:02.172056   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:02.172072   12353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:23:02.471217   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:23:02.471239   12353 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:23:02.471246   12353 main.go:141] libmachine: (addons-337450) Calling .GetURL
	I0421 18:23:02.472521   12353 main.go:141] libmachine: (addons-337450) DBG | Using libvirt version 6000000
	I0421 18:23:02.475007   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.475422   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.475451   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.475637   12353 main.go:141] libmachine: Docker is up and running!
	I0421 18:23:02.475652   12353 main.go:141] libmachine: Reticulating splines...
	I0421 18:23:02.475658   12353 client.go:171] duration metric: took 27.160821013s to LocalClient.Create
	I0421 18:23:02.475679   12353 start.go:167] duration metric: took 27.160882242s to libmachine.API.Create "addons-337450"
	I0421 18:23:02.475697   12353 start.go:293] postStartSetup for "addons-337450" (driver="kvm2")
	I0421 18:23:02.475710   12353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:23:02.475726   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.475998   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:23:02.476020   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.478437   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.478934   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.478960   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.479100   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.479296   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.479469   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.479707   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.562410   12353 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:23:02.567270   12353 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:23:02.567294   12353 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:23:02.567373   12353 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:23:02.567405   12353 start.go:296] duration metric: took 91.700109ms for postStartSetup
	I0421 18:23:02.567437   12353 main.go:141] libmachine: (addons-337450) Calling .GetConfigRaw
	I0421 18:23:02.567976   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:02.570924   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.571584   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.571609   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.571880   12353 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/config.json ...
	I0421 18:23:02.572068   12353 start.go:128] duration metric: took 27.275295251s to createHost
	I0421 18:23:02.572093   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.574438   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.574829   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.574859   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.574995   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.575184   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.575332   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.575472   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.575643   12353 main.go:141] libmachine: Using SSH client type: native
	I0421 18:23:02.575820   12353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0421 18:23:02.575830   12353 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:23:02.675669   12353 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713723782.649027493
	
	I0421 18:23:02.675692   12353 fix.go:216] guest clock: 1713723782.649027493
	I0421 18:23:02.675700   12353 fix.go:229] Guest: 2024-04-21 18:23:02.649027493 +0000 UTC Remote: 2024-04-21 18:23:02.572081139 +0000 UTC m=+27.390275697 (delta=76.946354ms)
	I0421 18:23:02.675735   12353 fix.go:200] guest clock delta is within tolerance: 76.946354ms
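Annotation: the clock check reads `date +%s.%N` from the guest (1713723782.649027493), compares it with the host-side reference recorded when the command returned, and accepts the ~77ms difference. The arithmetic amounts to the following sketch (the tolerance value used here is an illustrative assumption, not minikube's constant):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest output of `date +%s.%N` and the host reference time, both from the log.
	guestRaw := "1713723782.649027493"
	hostRef := time.Date(2024, 4, 21, 18, 23, 2, 572081139, time.UTC)

	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := guest.Sub(hostRef)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance below is an assumed illustrative threshold.
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
}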
	I0421 18:23:02.675740   12353 start.go:83] releasing machines lock for "addons-337450", held for 27.379060586s
	I0421 18:23:02.675758   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.675995   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:02.678400   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.678723   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.678747   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.678940   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.679396   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.679558   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:02.679638   12353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:23:02.679682   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.679745   12353 ssh_runner.go:195] Run: cat /version.json
	I0421 18:23:02.679769   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:02.682106   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682328   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682451   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.682477   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682574   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.682742   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:02.682767   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.682774   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:02.682886   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:02.682940   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.683027   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:02.683085   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.683147   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:02.683249   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:02.817958   12353 ssh_runner.go:195] Run: systemctl --version
	I0421 18:23:02.824467   12353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:23:02.989445   12353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:23:02.997276   12353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:23:02.997349   12353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:23:03.014851   12353 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:23:03.014878   12353 start.go:494] detecting cgroup driver to use...
	I0421 18:23:03.014947   12353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:23:03.031066   12353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:23:03.045566   12353 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:23:03.045618   12353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:23:03.059952   12353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:23:03.074163   12353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:23:03.191599   12353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:23:03.336464   12353 docker.go:233] disabling docker service ...
	I0421 18:23:03.336548   12353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:23:03.353356   12353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:23:03.367747   12353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:23:03.512164   12353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:23:03.650757   12353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:23:03.666983   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:23:03.688494   12353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:23:03.688566   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.701349   12353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:23:03.701428   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.715725   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.728486   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.747516   12353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:23:03.759239   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.770359   12353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.789434   12353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:23:03.800720   12353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:23:03.811266   12353 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:23:03.811332   12353 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:23:03.827668   12353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:23:03.838782   12353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:23:03.963880   12353 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:23:04.114123   12353 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:23:04.114211   12353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:23:04.119626   12353 start.go:562] Will wait 60s for crictl version
	I0421 18:23:04.119682   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:23:04.123803   12353 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:23:04.165462   12353 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:23:04.165573   12353 ssh_runner.go:195] Run: crio --version
	I0421 18:23:04.196870   12353 ssh_runner.go:195] Run: crio --version
	I0421 18:23:04.229837   12353 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:23:04.231352   12353 main.go:141] libmachine: (addons-337450) Calling .GetIP
	I0421 18:23:04.234111   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:04.234416   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:04.234451   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:04.234620   12353 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:23:04.239188   12353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:23:04.252618   12353 kubeadm.go:877] updating cluster {Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:23:04.252802   12353 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:23:04.252862   12353 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:23:04.292502   12353 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 18:23:04.292573   12353 ssh_runner.go:195] Run: which lz4
	I0421 18:23:04.297062   12353 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 18:23:04.301681   12353 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 18:23:04.301717   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 18:23:05.912822   12353 crio.go:462] duration metric: took 1.615791433s to copy over tarball
	I0421 18:23:05.912906   12353 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 18:23:08.547171   12353 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634239198s)
	I0421 18:23:08.547198   12353 crio.go:469] duration metric: took 2.634350292s to extract the tarball
	I0421 18:23:08.547208   12353 ssh_runner.go:146] rm: /preloaded.tar.lz4
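Annotation: because no preloaded images were found on the guest, the ~394MB preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 is copied over and unpacked into /var with `tar -I lz4`, then removed. A bare-bones version of that extraction step (illustrative only; it assumes the tarball is already on the machine and lz4 is installed) is:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same invocation as the log: preserve xattrs and decompress with lz4.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// Remove the tarball afterwards to free space, as the log shows.
	if out, err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").CombinedOutput(); err != nil {
		log.Fatalf("rm: %v: %s", err, out)
	}
}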
	I0421 18:23:08.587022   12353 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:23:08.637424   12353 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:23:08.637447   12353 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:23:08.637457   12353 kubeadm.go:928] updating node { 192.168.39.51 8443 v1.30.0 crio true true} ...
	I0421 18:23:08.637573   12353 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-337450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:23:08.637662   12353 ssh_runner.go:195] Run: crio config
	I0421 18:23:08.684573   12353 cni.go:84] Creating CNI manager for ""
	I0421 18:23:08.684596   12353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:23:08.684608   12353 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:23:08.684627   12353 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-337450 NodeName:addons-337450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:23:08.684750   12353 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-337450"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 18:23:08.684808   12353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:23:08.696489   12353 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:23:08.696564   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 18:23:08.707350   12353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0421 18:23:08.726272   12353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:23:08.745532   12353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
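
The kubeadm.yaml shown above is rendered from the kubeadm options struct (kubeadm.go:181/187) and copied to /var/tmp/minikube/kubeadm.yaml.new before kubeadm init runs. A minimal text/template sketch of that kind of rendering; the struct and field names here are made up for illustration and only cover a fraction of the real config:

    package main

    import (
        "os"
        "text/template"
    )

    // nodeConfig holds just the values the template below needs; minikube's real
    // options struct carries far more fields.
    type nodeConfig struct {
        AdvertiseAddress  string
        BindPort          int
        NodeName          string
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        cfg := nodeConfig{
            AdvertiseAddress:  "192.168.39.51",
            BindPort:          8443,
            NodeName:          "addons-337450",
            KubernetesVersion: "v1.30.0",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        }
        t := template.Must(template.New("kubeadm").Parse(initTmpl))
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
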
	I0421 18:23:08.764282   12353 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0421 18:23:08.768717   12353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
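
The bash one-liner above keeps the /etc/hosts entry idempotent: it strips any existing control-plane.minikube.internal line, appends the current IP, and copies the temp file back over /etc/hosts. The same pattern in Go, as a sketch (pointed at a scratch file here rather than the real /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any line already mapping `host` and appends a fresh
    // "ip<TAB>host" entry, mirroring the grep -v / echo / cp pattern in the log.
    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue // drop blank lines and stale entries for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHostsEntry("hosts.test", "192.168.39.51", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
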
	I0421 18:23:08.782658   12353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:23:08.910083   12353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:23:08.930315   12353 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450 for IP: 192.168.39.51
	I0421 18:23:08.930342   12353 certs.go:194] generating shared ca certs ...
	I0421 18:23:08.930363   12353 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:08.930522   12353 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:23:09.066629   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt ...
	I0421 18:23:09.066659   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt: {Name:mk5a664d977aab951980c9523c0f69eb4aa7a00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.066826   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key ...
	I0421 18:23:09.066841   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key: {Name:mk3fcec5c20999d335d6a5dac5fc16bf27da2984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.066912   12353 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:23:09.179092   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt ...
	I0421 18:23:09.179120   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt: {Name:mkf45db38f5b63b2dcc8473373bea520935f8d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.179286   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key ...
	I0421 18:23:09.179298   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key: {Name:mk7e58d4cac388d3c1580b19b2d8fcf71f4dba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.179370   12353 certs.go:256] generating profile certs ...
	I0421 18:23:09.179422   12353 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.key
	I0421 18:23:09.179436   12353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt with IP's: []
	I0421 18:23:09.422548   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt ...
	I0421 18:23:09.422575   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: {Name:mkb95799b2bcb246ea2be7e267ed3faffc78c639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.422731   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.key ...
	I0421 18:23:09.422742   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.key: {Name:mk74672d5b54e5c9788d1c06d12e69cbba120437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.422809   12353 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d
	I0421 18:23:09.422826   12353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.51]
	I0421 18:23:09.549381   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d ...
	I0421 18:23:09.549413   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d: {Name:mk87574e1d2cfde51605eb05f68cb97f5958443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.549577   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d ...
	I0421 18:23:09.549591   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d: {Name:mk456d7f4be5166d19cb0fa70f5d92c8d40a09ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.549663   12353 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt.19a3603d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt
	I0421 18:23:09.549756   12353 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key.19a3603d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key
	I0421 18:23:09.549806   12353 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key
	I0421 18:23:09.549823   12353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt with IP's: []
	I0421 18:23:09.642270   12353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt ...
	I0421 18:23:09.642308   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt: {Name:mkb6d378f694b6ad483fa038d205e6585b0f80ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:09.642463   12353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key ...
	I0421 18:23:09.642474   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key: {Name:mke7cbd9b1e9905297223b503bbc4c5986fcca05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
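
The certs.go/crypto.go lines above build a shared CA plus per-profile certificates whose SANs cover the service VIP (10.96.0.1), loopback, and the node IP (192.168.39.51). A compact crypto/x509 sketch of that shape, generating a self-signed CA and then a server certificate signed by it; this is illustrative only, not minikube's crypto.go implementation, and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key and self-signed CA certificate.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // API-server-style certificate with IP SANs, signed by the CA above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.51"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
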
	I0421 18:23:09.642619   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:23:09.642653   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:23:09.642679   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:23:09.642722   12353 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:23:09.643336   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:23:09.677535   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:23:09.709139   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:23:09.742754   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:23:09.772085   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 18:23:09.802198   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:23:09.832739   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:23:09.861749   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 18:23:09.891829   12353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:23:09.923330   12353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:23:09.945934   12353 ssh_runner.go:195] Run: openssl version
	I0421 18:23:09.953391   12353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:23:09.967785   12353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:23:09.973575   12353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:23:09.973629   12353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:23:09.980758   12353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:23:09.995455   12353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:23:10.000508   12353 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:23:10.000558   12353 kubeadm.go:391] StartCluster: {Name:addons-337450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-337450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:23:10.000625   12353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 18:23:10.000667   12353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 18:23:10.042788   12353 cri.go:89] found id: ""
	I0421 18:23:10.042862   12353 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 18:23:10.055629   12353 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 18:23:10.068449   12353 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 18:23:10.080493   12353 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 18:23:10.080523   12353 kubeadm.go:156] found existing configuration files:
	
	I0421 18:23:10.080610   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 18:23:10.092266   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 18:23:10.092344   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 18:23:10.105313   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 18:23:10.117055   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 18:23:10.117107   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 18:23:10.132622   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 18:23:10.146228   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 18:23:10.146283   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 18:23:10.157504   12353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 18:23:10.167867   12353 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 18:23:10.167932   12353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
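
kubeadm.go:162 above probes each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so the following kubeadm init starts from a clean slate. A sketch of that cleanup loop against local paths (minikube actually runs the equivalent grep/rm commands over ssh_runner):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
    // mention the expected API server endpoint; missing files are simply skipped.
    func removeStaleKubeconfigs(endpoint string, paths []string) error {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if os.IsNotExist(err) {
                continue
            }
            if err != nil {
                return err
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("removing stale config %s\n", p)
                if err := os.Remove(p); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", files); err != nil {
            panic(err)
        }
    }
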
	I0421 18:23:10.178596   12353 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 18:23:10.372860   12353 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 18:23:20.562049   12353 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 18:23:20.562181   12353 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 18:23:20.562288   12353 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 18:23:20.562416   12353 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 18:23:20.562543   12353 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 18:23:20.562740   12353 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 18:23:20.564408   12353 out.go:204]   - Generating certificates and keys ...
	I0421 18:23:20.564474   12353 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 18:23:20.564523   12353 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 18:23:20.564586   12353 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 18:23:20.564653   12353 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 18:23:20.564717   12353 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 18:23:20.564775   12353 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 18:23:20.564871   12353 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 18:23:20.565030   12353 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-337450 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0421 18:23:20.565112   12353 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 18:23:20.565265   12353 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-337450 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0421 18:23:20.565322   12353 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 18:23:20.565373   12353 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 18:23:20.565424   12353 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 18:23:20.565467   12353 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 18:23:20.565508   12353 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 18:23:20.565552   12353 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 18:23:20.565594   12353 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 18:23:20.565646   12353 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 18:23:20.565688   12353 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 18:23:20.565758   12353 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 18:23:20.565813   12353 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 18:23:20.567163   12353 out.go:204]   - Booting up control plane ...
	I0421 18:23:20.567233   12353 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 18:23:20.567310   12353 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 18:23:20.567377   12353 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 18:23:20.567466   12353 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 18:23:20.567559   12353 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 18:23:20.567605   12353 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 18:23:20.567724   12353 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 18:23:20.567792   12353 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 18:23:20.567851   12353 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.043039ms
	I0421 18:23:20.567913   12353 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 18:23:20.567969   12353 kubeadm.go:309] [api-check] The API server is healthy after 5.502975082s
	I0421 18:23:20.568080   12353 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 18:23:20.568226   12353 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 18:23:20.568319   12353 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 18:23:20.568570   12353 kubeadm.go:309] [mark-control-plane] Marking the node addons-337450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 18:23:20.568624   12353 kubeadm.go:309] [bootstrap-token] Using token: intyc2.kpq50nnam4k5x17k
	I0421 18:23:20.570983   12353 out.go:204]   - Configuring RBAC rules ...
	I0421 18:23:20.571065   12353 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 18:23:20.571164   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 18:23:20.571312   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 18:23:20.571475   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 18:23:20.571641   12353 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 18:23:20.571714   12353 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 18:23:20.571814   12353 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 18:23:20.571854   12353 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 18:23:20.571901   12353 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 18:23:20.571915   12353 kubeadm.go:309] 
	I0421 18:23:20.571960   12353 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 18:23:20.571965   12353 kubeadm.go:309] 
	I0421 18:23:20.572022   12353 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 18:23:20.572028   12353 kubeadm.go:309] 
	I0421 18:23:20.572054   12353 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 18:23:20.572109   12353 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 18:23:20.572149   12353 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 18:23:20.572155   12353 kubeadm.go:309] 
	I0421 18:23:20.572196   12353 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 18:23:20.572202   12353 kubeadm.go:309] 
	I0421 18:23:20.572240   12353 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 18:23:20.572246   12353 kubeadm.go:309] 
	I0421 18:23:20.572287   12353 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 18:23:20.572355   12353 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 18:23:20.572412   12353 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 18:23:20.572421   12353 kubeadm.go:309] 
	I0421 18:23:20.572487   12353 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 18:23:20.572548   12353 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 18:23:20.572554   12353 kubeadm.go:309] 
	I0421 18:23:20.572618   12353 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token intyc2.kpq50nnam4k5x17k \
	I0421 18:23:20.572704   12353 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 18:23:20.572736   12353 kubeadm.go:309] 	--control-plane 
	I0421 18:23:20.572746   12353 kubeadm.go:309] 
	I0421 18:23:20.572811   12353 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 18:23:20.572820   12353 kubeadm.go:309] 
	I0421 18:23:20.572884   12353 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token intyc2.kpq50nnam4k5x17k \
	I0421 18:23:20.572973   12353 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 18:23:20.572982   12353 cni.go:84] Creating CNI manager for ""
	I0421 18:23:20.572988   12353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:23:20.574483   12353 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 18:23:20.575616   12353 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 18:23:20.588196   12353 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
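
The 496-byte /etc/cni/net.d/1-k8s.conflist written above wires the bridge CNI plugin to the 10.244.0.0/16 pod CIDR chosen earlier. A representative bridge + portmap conflist of that kind, written from Go for illustration; the exact contents minikube generates may differ:

    package main

    import "os"

    // A representative bridge + portmap CNI chain for the cluster's pod CIDR.
    // This shows the file's general shape; it is not a byte-for-byte copy of the
    // conflist minikube writes.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        // Write to a local scratch path; on the node this would land in /etc/cni/net.d/.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }
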
	I0421 18:23:20.615993   12353 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 18:23:20.616113   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:20.616205   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-337450 minikube.k8s.io/updated_at=2024_04_21T18_23_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=addons-337450 minikube.k8s.io/primary=true
	I0421 18:23:20.642160   12353 ops.go:34] apiserver oom_adj: -16
	I0421 18:23:20.779027   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:21.279229   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:21.779104   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:22.280005   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:22.779196   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:23.279473   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:23.779537   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:24.279267   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:24.780099   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:25.279158   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:25.779797   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:26.279210   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:26.779782   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:27.279932   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:27.779775   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:28.279661   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:28.779170   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:29.279304   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:29.779891   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:30.279572   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:30.779479   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:31.279995   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:31.779482   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:32.279914   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:32.779906   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:33.279456   12353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:23:33.424816   12353 kubeadm.go:1107] duration metric: took 12.808775729s to wait for elevateKubeSystemPrivileges
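
The repeated "kubectl get sa default" runs above are a fixed-interval poll: cluster bring-up is not treated as finished until the default service account exists and the minikube-rbac binding can take effect (the whole wait took about 12.8s here). A minimal polling sketch with os/exec and a deadline, for illustration:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` at the given interval until
    // the command succeeds or the timeout expires.
    func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil // default service account exists
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for default service account")
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute)
        fmt.Println(err)
    }
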
	W0421 18:23:33.424877   12353 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 18:23:33.424888   12353 kubeadm.go:393] duration metric: took 23.424333542s to StartCluster
	I0421 18:23:33.424913   12353 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:33.425074   12353 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:23:33.425591   12353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:33.425774   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 18:23:33.425796   12353 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:23:33.427605   12353 out.go:177] * Verifying Kubernetes components...
	I0421 18:23:33.425860   12353 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0421 18:23:33.426036   12353 config.go:182] Loaded profile config "addons-337450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:23:33.429046   12353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:23:33.429082   12353 addons.go:69] Setting yakd=true in profile "addons-337450"
	I0421 18:23:33.429094   12353 addons.go:69] Setting cloud-spanner=true in profile "addons-337450"
	I0421 18:23:33.429112   12353 addons.go:69] Setting helm-tiller=true in profile "addons-337450"
	I0421 18:23:33.429121   12353 addons.go:69] Setting inspektor-gadget=true in profile "addons-337450"
	I0421 18:23:33.429134   12353 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-337450"
	I0421 18:23:33.429143   12353 addons.go:234] Setting addon cloud-spanner=true in "addons-337450"
	I0421 18:23:33.429148   12353 addons.go:234] Setting addon helm-tiller=true in "addons-337450"
	I0421 18:23:33.429153   12353 addons.go:234] Setting addon inspektor-gadget=true in "addons-337450"
	I0421 18:23:33.429162   12353 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-337450"
	I0421 18:23:33.429157   12353 addons.go:69] Setting registry=true in profile "addons-337450"
	I0421 18:23:33.429183   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429125   12353 addons.go:69] Setting storage-provisioner=true in profile "addons-337450"
	I0421 18:23:33.429194   12353 addons.go:234] Setting addon registry=true in "addons-337450"
	I0421 18:23:33.429185   12353 addons.go:69] Setting volumesnapshots=true in profile "addons-337450"
	I0421 18:23:33.429204   12353 addons.go:234] Setting addon storage-provisioner=true in "addons-337450"
	I0421 18:23:33.429213   12353 addons.go:69] Setting gcp-auth=true in profile "addons-337450"
	I0421 18:23:33.429194   12353 addons.go:69] Setting default-storageclass=true in profile "addons-337450"
	I0421 18:23:33.429233   12353 addons.go:69] Setting metrics-server=true in profile "addons-337450"
	I0421 18:23:33.429244   12353 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-337450"
	I0421 18:23:33.429250   12353 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-337450"
	I0421 18:23:33.429262   12353 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-337450"
	I0421 18:23:33.429295   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429115   12353 addons.go:234] Setting addon yakd=true in "addons-337450"
	I0421 18:23:33.429384   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429084   12353 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-337450"
	I0421 18:23:33.429459   12353 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-337450"
	I0421 18:23:33.429486   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429595   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429605   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429622   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429629   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429655   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429665   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429183   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429253   12353 addons.go:234] Setting addon metrics-server=true in "addons-337450"
	I0421 18:23:33.429245   12353 mustload.go:65] Loading cluster: addons-337450
	I0421 18:23:33.429794   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429800   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429820   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429898   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429934   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429951   12353 config.go:182] Loaded profile config "addons-337450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:23:33.430075   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.429183   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430098   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429635   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429216   12353 addons.go:234] Setting addon volumesnapshots=true in "addons-337450"
	I0421 18:23:33.430237   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430295   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.430321   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.430459   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.430493   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429224   12353 addons.go:69] Setting ingress=true in profile "addons-337450"
	I0421 18:23:33.430565   12353 addons.go:234] Setting addon ingress=true in "addons-337450"
	I0421 18:23:33.430603   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430923   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.430941   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.431006   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.431159   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.431191   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429232   12353 addons.go:69] Setting ingress-dns=true in profile "addons-337450"
	I0421 18:23:33.431280   12353 addons.go:234] Setting addon ingress-dns=true in "addons-337450"
	I0421 18:23:33.431311   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.429224   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.430193   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.431612   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.429230   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.450140   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46223
	I0421 18:23:33.450223   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0421 18:23:33.451165   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.451219   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.451715   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.451728   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.451737   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.451743   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.452107   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.452251   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.452301   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.452796   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.452829   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.454710   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0421 18:23:33.455127   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.455643   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.455672   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.456006   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.456189   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.456658   12353 addons.go:234] Setting addon default-storageclass=true in "addons-337450"
	I0421 18:23:33.456702   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.457102   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.457137   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.458686   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.458733   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.458968   12353 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-337450"
	I0421 18:23:33.459014   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.459344   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.459389   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.459356   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.459471   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.459888   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.459923   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.462921   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0421 18:23:33.463129   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I0421 18:23:33.463383   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.463827   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.463845   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.464046   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.464317   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.464830   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.464855   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.465449   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.465466   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.465863   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.466378   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.466418   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.480850   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0421 18:23:33.481544   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.482246   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.482268   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.482619   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.482775   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38863
	I0421 18:23:33.482936   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0421 18:23:33.483235   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.483312   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.483324   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.483360   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.483592   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0421 18:23:33.483727   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.483746   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.483745   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.483793   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.484072   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.484445   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.484652   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.484734   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.484778   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.485141   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.485186   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.486848   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.486866   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.487218   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.493151   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.493182   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.496182   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0421 18:23:33.496347   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0421 18:23:33.496872   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.497381   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.497405   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.497738   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.497911   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.499701   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.500293   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.500892   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.500914   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.502030   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.502359   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.504184   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:33.504582   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.504602   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.507251   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0421 18:23:33.509482   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0421 18:23:33.508065   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.512185   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0421 18:23:33.511376   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.513533   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.513591   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0421 18:23:33.514039   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.515232   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0421 18:23:33.516857   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0421 18:23:33.515911   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.519977   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0421 18:23:33.521197   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0421 18:23:33.520218   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.520955   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0421 18:23:33.523749   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0421 18:23:33.524992   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0421 18:23:33.525008   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0421 18:23:33.525029   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.526268   12353 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0421 18:23:33.523157   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.523566   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0421 18:23:33.525907   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0421 18:23:33.527317   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0421 18:23:33.527895   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0421 18:23:33.528419   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0421 18:23:33.528432   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0421 18:23:33.528451   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.528246   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.530694   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.530708   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
	I0421 18:23:33.530717   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.530698   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.530827   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.530855   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.531113   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.531142   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.531152   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.531506   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.531553   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.531640   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.531647   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.531662   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.531674   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.532115   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.532229   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.532243   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.532398   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.532412   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.532476   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.532688   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.532758   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.533290   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.533299   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.533345   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.533351   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.533439   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.533452   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.533750   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.533769   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.533824   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.533848   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.534007   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.534224   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.534242   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.534268   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.534281   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.534297   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.534833   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.535025   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.535156   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.535260   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.535357   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.535787   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.535845   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.537860   12353 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0421 18:23:33.540222   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.537482   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.537593   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.539315   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0421 18:23:33.539675   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I0421 18:23:33.540504   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0421 18:23:33.541534   12353 out.go:177]   - Using image docker.io/busybox:stable
	I0421 18:23:33.542283   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.542285   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.542430   12353 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 18:23:33.542682   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.543338   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45107
	I0421 18:23:33.543530   12353 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0421 18:23:33.543636   12353 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0421 18:23:33.544132   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.544943   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.544996   12353 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0421 18:23:33.546446   12353 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0421 18:23:33.546461   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0421 18:23:33.546477   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.547994   12353 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0421 18:23:33.548011   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0421 18:23:33.548030   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.545131   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0421 18:23:33.548089   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.544230   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.548128   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.545142   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 18:23:33.548166   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.545969   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.548190   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.546111   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.546326   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.546689   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0421 18:23:33.549340   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.549358   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.549409   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.549500   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.550100   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.550136   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.550690   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.550697   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.550721   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.551178   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.551201   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.551271   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.551477   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0421 18:23:33.551500   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.551560   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.551988   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.552026   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.552240   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.552520   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.552624   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.552649   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.552750   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.553082   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.553118   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.553285   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.553446   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.553572   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.554342   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.554422   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.554436   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.554541   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.554569   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.555381   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.555411   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0421 18:23:33.555469   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.555498   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.555827   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.555847   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.555851   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.556115   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.556155   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.556222   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:33.556247   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:33.556415   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.556669   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.556732   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.556800   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.557016   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.557342   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.557360   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.557641   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0421 18:23:33.557782   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0421 18:23:33.558159   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.558577   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.558637   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.558649   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.558664   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.559032   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.559049   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.559114   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.559133   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.559223   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.559390   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.559433   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.559541   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.559566   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.559593   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.560162   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.560629   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.560828   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.561001   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.561479   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.561740   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.563846   12353 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0421 18:23:33.565085   12353 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0421 18:23:33.562847   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.566305   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0421 18:23:33.568530   12353 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0421 18:23:33.567448   12353 out.go:177]   - Using image docker.io/registry:2.8.3
	I0421 18:23:33.567465   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0421 18:23:33.569782   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.569927   12353 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0421 18:23:33.569936   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0421 18:23:33.569962   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.572118   12353 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0421 18:23:33.572135   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0421 18:23:33.572156   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.574934   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.575680   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.576500   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.576526   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.576758   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.576783   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.576960   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.577135   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.577181   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.577349   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.577399   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.577643   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.577924   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.578037   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0421 18:23:33.578293   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.578520   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.578548   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.578840   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.578871   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.578940   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.579112   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.579235   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.579347   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.579684   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.579693   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.580019   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.580186   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.581511   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.583621   12353 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0421 18:23:33.585030   12353 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0421 18:23:33.585045   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0421 18:23:33.585062   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.587674   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.587996   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.588009   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.588129   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.588297   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.588460   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.588605   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.590892   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0421 18:23:33.591341   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.592402   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.592421   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.592789   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.592865   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
	I0421 18:23:33.592889   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I0421 18:23:33.593335   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.593347   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.593394   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.594305   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.594320   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.594322   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.594337   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.594685   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.594919   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.595156   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.595316   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.595530   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0421 18:23:33.595965   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:33.596771   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.596921   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:33.596934   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:33.598969   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0421 18:23:33.597295   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:33.597316   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.597905   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.602663   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:23:33.601374   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:33.604242   12353 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:23:33.605161   12353 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:23:33.605175   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 18:23:33.605191   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.610201   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:23:33.604295   12353 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0421 18:23:33.606940   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:33.608259   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.608835   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.611691   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.612979   12353 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0421 18:23:33.612991   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0421 18:23:33.611785   12353 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0421 18:23:33.613021   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0421 18:23:33.613039   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.611812   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.613086   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.614473   12353 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0421 18:23:33.611969   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.613005   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.615803   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 18:23:33.615815   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 18:23:33.615834   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:33.616021   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.616289   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.616836   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.616859   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.618306   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.618465   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.618626   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.618749   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.619539   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.619914   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.619933   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.620031   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.620144   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.620253   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.620342   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.620524   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.620792   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:33.620822   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:33.621050   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:33.621162   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:33.621286   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:33.621394   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:33.967147   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0421 18:23:33.984189   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0421 18:23:33.984212   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0421 18:23:34.020380   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0421 18:23:34.134156   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:23:34.167279   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0421 18:23:34.179870   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0421 18:23:34.225203   12353 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0421 18:23:34.225226   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0421 18:23:34.227900   12353 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0421 18:23:34.227920   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0421 18:23:34.245806   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 18:23:34.255276   12353 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0421 18:23:34.255303   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0421 18:23:34.258271   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 18:23:34.258291   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0421 18:23:34.261142   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0421 18:23:34.261158   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0421 18:23:34.275699   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0421 18:23:34.284031   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0421 18:23:34.284050   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0421 18:23:34.310635   12353 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0421 18:23:34.310658   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0421 18:23:34.376390   12353 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0421 18:23:34.376409   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0421 18:23:34.405862   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 18:23:34.405885   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 18:23:34.430877   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0421 18:23:34.430904   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0421 18:23:34.463086   12353 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.037283077s)
	I0421 18:23:34.463122   12353 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.034051205s)
	I0421 18:23:34.463183   12353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:23:34.463254   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
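	(For reference: the command in the preceding line rewrites the CoreDNS ConfigMap so that in-cluster DNS resolves host.minikube.internal to the host IP 192.168.39.1. Reconstructed from the sed expression itself — an illustration derived from the command text, not additional captured log output — the block inserted ahead of the "forward . /etc/resolv.conf" directive is:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

	The second sed expression also adds a "log" directive before the "errors" line; the "host record injected into CoreDNS's ConfigMap" entry at 18:23:43 below confirms the replacement succeeded.)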
	I0421 18:23:34.477680   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0421 18:23:34.477700   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0421 18:23:34.497615   12353 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0421 18:23:34.497638   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0421 18:23:34.569616   12353 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0421 18:23:34.569639   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0421 18:23:34.574884   12353 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0421 18:23:34.574902   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0421 18:23:34.598290   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0421 18:23:34.598308   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0421 18:23:34.627119   12353 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0421 18:23:34.627144   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0421 18:23:34.664234   12353 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0421 18:23:34.664256   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0421 18:23:34.707872   12353 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 18:23:34.707897   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 18:23:34.726284   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0421 18:23:34.795968   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0421 18:23:34.795989   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0421 18:23:34.856966   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 18:23:34.905616   12353 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0421 18:23:34.905648   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0421 18:23:34.913190   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0421 18:23:34.994288   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0421 18:23:34.994318   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0421 18:23:35.013022   12353 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0421 18:23:35.013045   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0421 18:23:35.090172   12353 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0421 18:23:35.090190   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0421 18:23:35.161726   12353 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0421 18:23:35.161749   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0421 18:23:35.338273   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0421 18:23:35.338294   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0421 18:23:35.429232   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0421 18:23:35.476772   12353 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:23:35.476793   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0421 18:23:35.585415   12353 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0421 18:23:35.585437   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0421 18:23:35.619649   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0421 18:23:35.619675   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0421 18:23:35.864060   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.896878725s)
	I0421 18:23:35.864110   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:35.864119   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:35.864431   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:35.864451   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:35.864462   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:35.864475   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:35.864697   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:35.864723   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:35.864742   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:35.899840   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:23:35.980104   12353 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0421 18:23:35.980133   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0421 18:23:35.985940   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0421 18:23:35.985971   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0421 18:23:36.285515   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0421 18:23:36.362818   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0421 18:23:36.362845   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0421 18:23:36.602741   12353 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0421 18:23:36.602772   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0421 18:23:37.106348   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0421 18:23:37.905532   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.885113672s)
	I0421 18:23:37.905586   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:37.905598   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:37.905895   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:37.905919   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:37.905929   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:37.905937   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:37.906219   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:37.906236   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:37.906255   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.034543   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.900354544s)
	I0421 18:23:39.034581   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.034589   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.034608   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.867298277s)
	I0421 18:23:39.034650   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.034666   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.034895   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.034910   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:39.034919   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.034927   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.034951   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.034969   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.034980   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:39.034993   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:39.035000   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:39.035054   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.035179   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.035187   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:39.035317   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:39.035329   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:39.035344   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:40.570318   12353 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0421 18:23:40.570358   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:40.574148   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:40.574587   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:40.574611   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:40.574800   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:40.575024   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:40.575193   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:40.575332   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:41.317473   12353 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0421 18:23:41.386155   12353 addons.go:234] Setting addon gcp-auth=true in "addons-337450"
	I0421 18:23:41.386213   12353 host.go:66] Checking if "addons-337450" exists ...
	I0421 18:23:41.386564   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:41.386594   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:41.402217   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0421 18:23:41.402723   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:41.403184   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:41.403212   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:41.403559   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:41.404138   12353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:23:41.404193   12353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:23:41.418969   12353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0421 18:23:41.419374   12353 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:23:41.419870   12353 main.go:141] libmachine: Using API Version  1
	I0421 18:23:41.419890   12353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:23:41.420236   12353 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:23:41.420436   12353 main.go:141] libmachine: (addons-337450) Calling .GetState
	I0421 18:23:41.421949   12353 main.go:141] libmachine: (addons-337450) Calling .DriverName
	I0421 18:23:41.422214   12353 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0421 18:23:41.422241   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHHostname
	I0421 18:23:41.424969   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:41.425342   12353 main.go:141] libmachine: (addons-337450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:47:66", ip: ""} in network mk-addons-337450: {Iface:virbr1 ExpiryTime:2024-04-21 19:22:51 +0000 UTC Type:0 Mac:52:54:00:b4:47:66 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-337450 Clientid:01:52:54:00:b4:47:66}
	I0421 18:23:41.425368   12353 main.go:141] libmachine: (addons-337450) DBG | domain addons-337450 has defined IP address 192.168.39.51 and MAC address 52:54:00:b4:47:66 in network mk-addons-337450
	I0421 18:23:41.425552   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHPort
	I0421 18:23:41.425735   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHKeyPath
	I0421 18:23:41.425910   12353 main.go:141] libmachine: (addons-337450) Calling .GetSSHUsername
	I0421 18:23:41.426050   12353 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/addons-337450/id_rsa Username:docker}
	I0421 18:23:43.086166   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.906249647s)
	I0421 18:23:43.086191   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.840352565s)
	I0421 18:23:43.086224   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086227   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086238   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086241   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086276   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.810546541s)
	I0421 18:23:43.086301   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086304   12353 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.623026687s)
	I0421 18:23:43.086318   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086322   12353 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.623121369s)
	I0421 18:23:43.086362   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.360042892s)
	I0421 18:23:43.086324   12353 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0421 18:23:43.086381   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086391   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086423   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.229423311s)
	I0421 18:23:43.086440   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086451   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086463   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.17324988s)
	I0421 18:23:43.086479   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086488   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086504   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086529   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086535   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086543   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086549   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086551   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.657293307s)
	I0421 18:23:43.086565   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086573   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086692   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.186820688s)
	W0421 18:23:43.086718   12353 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0421 18:23:43.086733   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086735   12353 retry.go:31] will retry after 241.171317ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
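The failed apply above is an ordering race: the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the snapshot.storage.k8s.io CRDs, and the API server has not finished establishing the new CRDs when the custom resource arrives, hence "no matches for kind VolumeSnapshotClass". As the following log lines show, minikube simply retries and later re-applies the batch with --force, which succeeds once the CRDs are registered. Purely as an illustrative sketch (not what minikube itself runs), the race can also be avoided by waiting for the CRD to reach the Established condition before creating the class:

    # Sketch only: apply the snapshot CRDs, wait until they are established,
    # then apply the VolumeSnapshotClass from the same addon manifests.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml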
	I0421 18:23:43.086761   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086777   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086786   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086789   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086796   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086800   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086804   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086807   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086812   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086824   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.801276028s)
	I0421 18:23:43.086839   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086848   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.086907   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.086929   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.086937   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.086944   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.086952   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.087273   12353 node_ready.go:35] waiting up to 6m0s for node "addons-337450" to be "Ready" ...
	I0421 18:23:43.087387   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087407   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087414   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087430   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.087437   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.087442   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.087445   12353 addons.go:470] Verifying addon ingress=true in "addons-337450"
	I0421 18:23:43.087449   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.087458   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.087465   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.087465   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.087475   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.087483   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.087490   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.090794   12353 out.go:177] * Verifying ingress addon...
	I0421 18:23:43.087514   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.087530   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.088477   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.088501   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.088520   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.088536   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.088558   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.089402   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.089432   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.089450   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.089466   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.089478   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.089493   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.091992   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092001   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092022   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092033   12353 addons.go:470] Verifying addon registry=true in "addons-337450"
	I0421 18:23:43.092037   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092040   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.093389   12353 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-337450 service yakd-dashboard -n yakd-dashboard
	
	I0421 18:23:43.092080   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.092103   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092110   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.092870   12353 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0421 18:23:43.094856   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.096182   12353 out.go:177] * Verifying registry addon...
	I0421 18:23:43.096187   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.097782   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.096382   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.097829   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.096394   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.098024   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.098037   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.098045   12353 addons.go:470] Verifying addon metrics-server=true in "addons-337450"
	I0421 18:23:43.098047   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.098553   12353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0421 18:23:43.156420   12353 node_ready.go:49] node "addons-337450" has status "Ready":"True"
	I0421 18:23:43.156446   12353 node_ready.go:38] duration metric: took 69.15647ms for node "addons-337450" to be "Ready" ...
	I0421 18:23:43.156455   12353 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:23:43.178622   12353 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0421 18:23:43.178656   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:43.178822   12353 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0421 18:23:43.178846   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:43.234367   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.234396   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.234680   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:43.234728   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.234744   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	W0421 18:23:43.234835   12353 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
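The warning above is a transient optimistic-concurrency conflict: two addon callbacks update the local-path StorageClass at nearly the same time, so one update is rejected because its resourceVersion is stale; re-running the update against the latest object succeeds. As a hedged sketch only (not the command minikube runs, and assuming minikube's usual "standard" StorageClass name), the same default/non-default toggle can be expressed with the well-known annotation:

    # Sketch only: mark local-path as non-default and standard as the default class.
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'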
	I0421 18:23:43.266513   12353 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:43.308816   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:43.308835   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:43.309121   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:43.309137   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:43.328590   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:23:43.593277   12353 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-337450" context rescaled to 1 replicas
	I0421 18:23:43.612050   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:43.614849   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:44.101274   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:44.103339   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:44.606005   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:44.610657   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:45.038455   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.932041101s)
	I0421 18:23:45.038501   12353 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.616263533s)
	I0421 18:23:45.038516   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.038529   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.040735   12353 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:23:45.038873   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:45.038818   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.042609   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.042624   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.042638   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.044138   12353 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0421 18:23:45.042906   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:45.042914   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.045788   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.045800   12353 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-337450"
	I0421 18:23:45.045837   12353 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0421 18:23:45.045855   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0421 18:23:45.047616   12353 out.go:177] * Verifying csi-hostpath-driver addon...
	I0421 18:23:45.049608   12353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0421 18:23:45.076238   12353 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0421 18:23:45.076257   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:45.101984   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:45.106819   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:45.163079   12353 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0421 18:23:45.163101   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0421 18:23:45.207937   12353 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0421 18:23:45.207955   12353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0421 18:23:45.287700   12353 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:45.323023   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.994392183s)
	I0421 18:23:45.323075   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.323086   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.323330   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:45.323366   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.323393   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.323407   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:45.323421   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:45.323788   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:45.323808   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:45.373965   12353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0421 18:23:45.556285   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:45.607647   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:45.610148   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:46.054989   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:46.101026   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:46.103456   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:46.578953   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:46.628231   12353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.254223479s)
	I0421 18:23:46.628285   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:46.628303   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:46.628687   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:46.628715   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:46.628726   12353 main.go:141] libmachine: Making call to close driver server
	I0421 18:23:46.628734   12353 main.go:141] libmachine: (addons-337450) Calling .Close
	I0421 18:23:46.628692   12353 main.go:141] libmachine: (addons-337450) DBG | Closing plugin on server side
	I0421 18:23:46.629010   12353 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:23:46.629029   12353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:23:46.630468   12353 addons.go:470] Verifying addon gcp-auth=true in "addons-337450"
	I0421 18:23:46.632027   12353 out.go:177] * Verifying gcp-auth addon...
	I0421 18:23:46.634214   12353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0421 18:23:46.653243   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:46.653476   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:46.679705   12353 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0421 18:23:46.679724   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:47.057266   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:47.104528   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:47.111483   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:47.149162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:47.555655   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:47.609968   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:47.617750   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:47.638889   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:47.772833   12353 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:48.056025   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:48.100320   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:48.103582   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:48.137414   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:48.563553   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:48.631810   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:48.642616   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:48.652846   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:49.055963   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:49.101577   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:49.103601   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:49.137322   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:49.556050   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:49.601148   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:49.605399   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:49.638185   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:50.055640   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:50.101635   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:50.105183   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:50.138083   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:50.274375   12353 pod_ready.go:102] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:50.559259   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:50.603655   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:50.604108   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:50.638115   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:51.056036   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:51.100281   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:51.103156   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:51.139677   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:51.556605   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:51.601167   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:51.603500   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:51.639007   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:52.055903   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:52.101400   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:52.104181   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:52.139201   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:52.281461   12353 pod_ready.go:97] pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.51 HostIPs:[{IP:192.168.39.51}] PodIP: PodIPs:[] StartTime:2024-04-21 18:23:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 18:23:36 +0000 UTC,FinishedAt:2024-04-21 18:23:48 +0000 UTC,ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf Started:0xc0021dda50 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 18:23:52.281494   12353 pod_ready.go:81] duration metric: took 9.014957574s for pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace to be "Ready" ...
	E0421 18:23:52.281506   12353 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-b9d65" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 18:23:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.51 HostIPs:[{IP:192.168.39.51}] PodIP: PodIPs:[] StartTime:2024-04-21 18:23:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 18:23:36 +0000 UTC,FinishedAt:2024-04-21 18:23:48 +0000 UTC,ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f634068fdf5664722d04dac7e97d958ac8bbb10ebd1bb280a42ec0d17590d6cf Started:0xc0021dda50 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 18:23:52.281514   12353 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zkbzm" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.288121   12353 pod_ready.go:92] pod "coredns-7db6d8ff4d-zkbzm" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.288144   12353 pod_ready.go:81] duration metric: took 6.620519ms for pod "coredns-7db6d8ff4d-zkbzm" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.288154   12353 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.299535   12353 pod_ready.go:92] pod "etcd-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.299564   12353 pod_ready.go:81] duration metric: took 11.399605ms for pod "etcd-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.299577   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.318918   12353 pod_ready.go:92] pod "kube-apiserver-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.318948   12353 pod_ready.go:81] duration metric: took 19.362263ms for pod "kube-apiserver-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.318962   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.339278   12353 pod_ready.go:92] pod "kube-controller-manager-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.339305   12353 pod_ready.go:81] duration metric: took 20.335162ms for pod "kube-controller-manager-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.339322   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n76l5" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.557143   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:52.603647   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:52.608439   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:52.638493   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:52.683576   12353 pod_ready.go:92] pod "kube-proxy-n76l5" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:52.683609   12353 pod_ready.go:81] duration metric: took 344.278927ms for pod "kube-proxy-n76l5" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:52.683623   12353 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:53.057032   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:53.070604   12353 pod_ready.go:92] pod "kube-scheduler-addons-337450" in "kube-system" namespace has status "Ready":"True"
	I0421 18:23:53.070627   12353 pod_ready.go:81] duration metric: took 386.996836ms for pod "kube-scheduler-addons-337450" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:53.070637   12353 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace to be "Ready" ...
	I0421 18:23:53.102028   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:53.104308   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:53.138617   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:53.556757   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:53.602824   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:53.605065   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:53.637564   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:54.056531   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:54.103001   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:54.103313   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:54.138174   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:54.556835   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:54.603380   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:54.605296   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:54.638817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:55.055187   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:55.077081   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:55.100868   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:55.103208   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:55.138761   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:55.558333   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:55.602972   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:55.604271   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:55.638001   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:56.071185   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:56.101366   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:56.107068   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:56.138688   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:56.558072   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:56.606658   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:56.608056   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:56.637737   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:57.062398   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:57.082900   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:57.102847   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:57.110360   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:57.138462   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:57.556374   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:57.601178   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:57.610030   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:57.639529   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:58.061977   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:58.114349   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:58.120278   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:58.137949   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:58.557836   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:58.600636   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:58.606249   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:58.638252   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:59.059075   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:59.100522   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:59.103777   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:59.138756   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:23:59.556947   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:23:59.577131   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:23:59.600742   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:23:59.607292   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:23:59.638312   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:00.062119   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:00.101224   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:00.105787   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:00.138091   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:00.696097   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:00.698664   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:00.700787   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:00.703569   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:01.056390   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:01.101026   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:01.103737   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:01.138639   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:01.562317   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:01.577182   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:01.603957   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:01.606157   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:01.638422   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:02.055907   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:02.100867   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:02.103270   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:02.137718   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:02.558906   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:02.603322   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:02.607952   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:02.637501   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:03.056331   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:03.101275   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:03.103605   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:03.139200   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:03.555768   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:03.601665   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:03.605366   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:03.638461   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:04.066742   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:04.094939   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:04.108168   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:04.108302   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:04.138320   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:04.555725   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:04.601861   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:04.603109   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:04.638462   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:05.056748   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:05.100501   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:05.105698   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:05.138802   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:05.560041   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:05.601617   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:05.604331   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:05.637777   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:06.056672   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:06.101935   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:06.103956   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:06.138521   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:06.557004   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:06.576773   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:06.602479   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:06.603600   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:06.639188   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:07.055162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:07.100485   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:07.106018   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:07.138774   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:07.557602   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:07.603313   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:07.607438   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:07.638382   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:08.056491   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:08.101266   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:08.104434   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:08.138603   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:08.556413   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:08.578559   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:08.601604   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:08.604211   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:08.640863   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:09.056457   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:09.101266   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:09.104354   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:09.138536   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:09.556493   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:09.601272   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:09.603817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:09.637511   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:10.056371   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:10.101090   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:10.103661   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:10.137883   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:10.557234   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:10.600885   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:10.603637   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:10.638340   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:11.170359   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:11.171402   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:11.171681   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:11.173995   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:11.174817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:11.556200   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:11.606113   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:11.610429   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:11.638644   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:12.058253   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:12.100185   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:12.102513   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:12.138804   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:12.555639   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:12.603472   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:12.605005   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:12.638795   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:13.055405   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:13.101151   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:13.104140   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:13.138939   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:13.560585   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:13.579659   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:13.600630   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:13.603583   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:13.638687   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:14.058191   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:14.104109   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:14.104455   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:14.138582   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:14.555348   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:14.602034   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:14.606171   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:14.638357   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:15.061433   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:15.100554   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:15.103050   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:15.139319   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:15.560992   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:15.602566   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:15.604318   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:15.641753   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:16.054884   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:16.076275   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:16.102141   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:16.104056   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:16.138817   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:16.555789   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:16.603871   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:16.611133   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:16.638005   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:17.060005   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:17.101843   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:17.106462   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:17.139061   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:17.556222   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:17.602563   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:17.611443   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:17.639549   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:18.054927   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:18.076515   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:18.099842   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:18.102652   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:18.137699   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:18.558100   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:18.606548   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:18.607232   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:18.638274   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:19.057178   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:19.102360   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:19.104398   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:19.138595   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:19.557463   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:19.602523   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:19.604091   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:19.638391   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:20.056025   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:20.077401   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:20.101078   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:20.104410   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:20.138746   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:20.571619   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:20.606366   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:20.612716   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:20.638818   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:21.056536   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:21.100768   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:21.104326   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:21.138977   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:21.561862   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:21.602810   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:21.606689   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:21.637848   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:22.060873   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:22.082712   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:22.103166   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:22.117132   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:22.143839   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:22.555669   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:22.606212   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:22.608845   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:22.637881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:23.056270   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:23.100727   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:23.104039   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:23.138450   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:23.566618   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:23.603660   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:23.611460   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:23.638471   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:24.058211   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:24.103055   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:24.105937   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:24.138388   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:24.557875   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:24.576577   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:24.605303   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:24.608546   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:24.638576   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:25.055125   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:25.100696   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:25.104459   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:25.139721   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:25.555792   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:25.603554   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:25.608259   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:25.637927   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:26.055350   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:26.100787   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:26.105783   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:26.138541   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:26.898905   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:26.903269   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:26.906973   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:26.907364   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:26.907377   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:27.061964   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:27.101141   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:27.106127   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:27.138240   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:27.556739   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:27.603375   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:27.605590   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:27.637796   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:28.055612   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:28.101215   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:28.104162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:28.138197   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:28.556131   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:28.601103   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:28.605237   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:28.637922   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:29.060652   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:29.077489   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:29.100491   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:29.103963   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:29.138395   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:29.560694   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:29.601064   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:29.604451   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:29.638722   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:30.055385   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:30.112429   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:30.114583   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:30.139077   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:30.555669   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:30.608285   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:30.609116   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:30.638079   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:31.056026   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:31.078961   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:31.102489   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:31.103915   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:31.138087   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:31.557672   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:31.605874   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:31.606054   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:31.638242   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:32.055758   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:32.101369   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:32.103998   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:32.137937   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:32.554919   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:32.601603   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:32.605429   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:32.643261   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:33.065236   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:33.083526   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:33.100360   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:33.105555   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:33.139671   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:33.559140   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:33.601427   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:33.604603   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:33.637615   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:34.055896   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:34.101556   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:34.106965   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:34.137681   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:34.560000   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:34.601295   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:34.604360   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:34.638615   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:35.057620   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:35.101183   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:35.108510   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:35.138971   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:35.556473   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:35.577554   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:35.600451   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:35.606234   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:35.638694   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:36.055218   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:36.101130   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:36.104945   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:36.138002   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:36.831048   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:36.831653   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:36.833645   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:36.835219   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:37.056345   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:37.101156   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:37.109628   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:37.138656   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:37.560127   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:37.577943   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:37.601947   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:37.605853   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:37.637796   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:38.057331   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:38.100114   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:38.102592   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:38.137600   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:38.556673   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:38.603665   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:38.605713   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:38.637649   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:39.059322   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:39.109702   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:39.110157   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:39.155839   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:39.560687   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:39.586363   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:39.609653   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:39.617843   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:39.639070   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:40.055695   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:40.100096   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:40.102768   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:40.138561   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:40.557329   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:40.600509   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:40.605064   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:40.638028   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:41.064700   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:41.101135   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:41.103923   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:41.138376   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:41.556123   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:41.600813   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:41.608573   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:41.639637   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:42.055592   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:42.076544   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:42.100643   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:42.117983   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:42.137881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:42.556303   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:42.601208   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:42.604977   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:42.638091   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:43.359176   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:43.359898   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:43.361095   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:43.363486   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:43.557369   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:43.607763   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:43.610620   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:43.638476   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:44.056606   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:44.077398   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:44.099953   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:44.102454   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:44.138870   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:44.558944   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:44.611458   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:44.616146   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:24:44.643909   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:45.056028   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:45.100795   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:45.104777   12353 kapi.go:107] duration metric: took 1m2.006225704s to wait for kubernetes.io/minikube-addons=registry ...
	I0421 18:24:45.137779   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:45.554791   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:45.603547   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:45.638441   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:46.058881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:46.103912   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:46.138032   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:46.555643   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:46.577452   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:46.604969   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:46.644037   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:47.057166   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:47.101314   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:47.138118   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:47.559393   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:47.601230   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:47.637410   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:48.063352   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:48.100861   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:48.139045   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:48.557388   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:48.604333   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:48.637987   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:49.055893   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:49.077363   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:49.101914   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:49.138912   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:49.555283   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:49.601981   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:49.638472   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:50.056193   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:50.401603   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:50.402668   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:50.562741   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:50.612712   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:50.645160   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:51.057566   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:51.077713   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:51.100799   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:51.137902   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:51.555384   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:51.601435   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:51.638440   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:52.057268   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:52.103239   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:52.142920   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:52.563534   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:52.600378   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:52.637854   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:53.061913   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:53.087651   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:53.099993   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:53.137546   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:53.556762   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:53.600511   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:53.638317   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:54.055750   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:54.104535   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:54.148080   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:54.556622   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:54.601555   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:54.638617   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:55.066674   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:55.106961   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:55.128529   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:55.150436   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:55.563503   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:55.627496   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:55.654629   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:56.082556   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:56.118485   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:56.140635   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:56.557752   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:56.605754   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:56.639604   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:57.057978   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:57.101312   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:57.139498   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:57.564522   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:57.577735   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:57.600592   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:57.638873   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:58.064376   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:58.102720   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:58.481670   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:58.569881   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:58.601961   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:58.638169   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:59.057213   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:59.105759   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:59.138539   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:24:59.557913   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:24:59.583436   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:24:59.613629   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:24:59.640649   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:00.056945   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:00.102411   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:00.138313   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:00.556148   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:00.600755   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:00.639307   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:01.056402   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:01.100893   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:01.138605   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:01.558577   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:01.612750   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:01.641092   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:02.056710   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:02.083002   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:02.102254   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:02.138786   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:02.555709   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:02.600684   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:02.637964   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:03.078243   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:03.195202   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:03.198137   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:03.559271   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:03.605538   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:03.638645   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:04.056288   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:25:04.099836   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:04.138821   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:04.561791   12353 kapi.go:107] duration metric: took 1m19.512183127s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0421 18:25:04.577287   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:04.601207   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:04.638417   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:05.109557   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:05.139454   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:05.601815   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:05.639047   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:06.101936   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:06.138355   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:06.578953   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:06.603740   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:06.639238   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:07.099911   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:07.137970   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:07.600812   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:07.637945   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:08.101711   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:08.138707   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:08.579850   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:08.604494   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:08.639303   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:09.100505   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:09.139440   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:09.601516   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:09.637542   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:10.101070   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:10.138432   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:10.601127   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:10.638029   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:11.077932   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:11.100720   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:11.138743   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:11.602174   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:11.638689   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:12.101424   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:12.139588   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:12.603119   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:12.639855   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:13.079758   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:13.101420   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:13.138573   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:13.603409   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:13.637812   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:14.100455   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:14.138146   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:14.605928   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:14.638198   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:15.101856   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:15.138748   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:15.578803   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:15.607205   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:15.638962   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:16.100851   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:16.138143   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:16.605271   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:16.638083   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:17.101220   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:17.138738   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:17.602686   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:17.639053   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:18.078777   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:18.100707   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:18.140501   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:18.601567   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:18.638727   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:19.100863   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:19.138231   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:19.601776   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:19.638524   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:20.101433   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:20.138227   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:20.580059   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:20.602708   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:20.639049   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:21.282600   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:21.283262   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:21.607910   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:21.637788   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:22.102247   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:22.138290   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:22.602114   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:22.637696   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:23.077366   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:23.101324   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:23.139476   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:23.602001   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:23.638374   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:24.101323   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:24.138547   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:24.601016   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:24.637972   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:25.101337   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:25.138647   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:25.576950   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:25.602268   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:25.638123   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:26.100959   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:26.137664   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:26.603010   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:26.637732   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:27.100441   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:27.138207   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:27.577917   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:27.602085   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:27.637745   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:28.100802   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:28.139093   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:28.604274   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:28.638602   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:29.101255   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:29.138303   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:29.578245   12353 pod_ready.go:102] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"False"
	I0421 18:25:29.602901   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:29.638905   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:30.103243   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:30.137566   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:30.580206   12353 pod_ready.go:92] pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace has status "Ready":"True"
	I0421 18:25:30.580231   12353 pod_ready.go:81] duration metric: took 1m37.509588555s for pod "metrics-server-c59844bb4-dkrx4" in "kube-system" namespace to be "Ready" ...
	I0421 18:25:30.580241   12353 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hggr8" in "kube-system" namespace to be "Ready" ...
	I0421 18:25:30.588202   12353 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-hggr8" in "kube-system" namespace has status "Ready":"True"
	I0421 18:25:30.588220   12353 pod_ready.go:81] duration metric: took 7.973227ms for pod "nvidia-device-plugin-daemonset-hggr8" in "kube-system" namespace to be "Ready" ...
	I0421 18:25:30.588238   12353 pod_ready.go:38] duration metric: took 1m47.43177281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:25:30.588255   12353 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:25:30.588302   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 18:25:30.588354   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 18:25:30.600757   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:30.638842   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:30.654484   12353 cri.go:89] found id: "fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:30.654505   12353 cri.go:89] found id: ""
	I0421 18:25:30.654515   12353 logs.go:276] 1 containers: [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8]
	I0421 18:25:30.654567   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.661089   12353 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 18:25:30.661171   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 18:25:30.700957   12353 cri.go:89] found id: "dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:30.700975   12353 cri.go:89] found id: ""
	I0421 18:25:30.700982   12353 logs.go:276] 1 containers: [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead]
	I0421 18:25:30.701037   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.705882   12353 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 18:25:30.705957   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 18:25:30.746323   12353 cri.go:89] found id: "5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:30.746345   12353 cri.go:89] found id: ""
	I0421 18:25:30.746354   12353 logs.go:276] 1 containers: [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab]
	I0421 18:25:30.746401   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.751039   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 18:25:30.751112   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 18:25:30.792285   12353 cri.go:89] found id: "eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:30.792308   12353 cri.go:89] found id: ""
	I0421 18:25:30.792327   12353 logs.go:276] 1 containers: [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4]
	I0421 18:25:30.792386   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.796968   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 18:25:30.797021   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 18:25:30.848234   12353 cri.go:89] found id: "7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:30.848259   12353 cri.go:89] found id: ""
	I0421 18:25:30.848269   12353 logs.go:276] 1 containers: [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1]
	I0421 18:25:30.848326   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.853159   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 18:25:30.853223   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 18:25:30.894417   12353 cri.go:89] found id: "78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:30.894443   12353 cri.go:89] found id: ""
	I0421 18:25:30.894452   12353 logs.go:276] 1 containers: [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af]
	I0421 18:25:30.894510   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:30.899109   12353 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 18:25:30.899177   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 18:25:30.938505   12353 cri.go:89] found id: ""
	I0421 18:25:30.938535   12353 logs.go:276] 0 containers: []
	W0421 18:25:30.938545   12353 logs.go:278] No container was found matching "kindnet"
	I0421 18:25:30.938555   12353 logs.go:123] Gathering logs for dmesg ...
	I0421 18:25:30.938568   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 18:25:30.954688   12353 logs.go:123] Gathering logs for describe nodes ...
	I0421 18:25:30.954715   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0421 18:25:31.109571   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:31.128037   12353 logs.go:123] Gathering logs for kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] ...
	I0421 18:25:31.128080   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:31.155811   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:31.213189   12353 logs.go:123] Gathering logs for etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] ...
	I0421 18:25:31.213219   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:31.281895   12353 logs.go:123] Gathering logs for kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] ...
	I0421 18:25:31.281927   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:31.347198   12353 logs.go:123] Gathering logs for CRI-O ...
	I0421 18:25:31.347229   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 18:25:31.602213   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:31.638141   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:32.101596   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:32.138023   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:32.241489   12353 logs.go:123] Gathering logs for container status ...
	I0421 18:25:32.241541   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 18:25:32.310770   12353 logs.go:123] Gathering logs for kubelet ...
	I0421 18:25:32.310798   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0421 18:25:32.365655   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:32.365815   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:32.404266   12353 logs.go:123] Gathering logs for coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] ...
	I0421 18:25:32.404304   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:32.446876   12353 logs.go:123] Gathering logs for kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] ...
	I0421 18:25:32.446900   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:32.491759   12353 logs.go:123] Gathering logs for kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] ...
	I0421 18:25:32.491791   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:32.563813   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:32.563842   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:25:32.563901   12353 out.go:239] X Problems detected in kubelet:
	W0421 18:25:32.563916   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:32.563927   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:32.563940   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:32.563951   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:25:32.601676   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:32.638572   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:33.101817   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:33.138937   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:33.602086   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:33.637545   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:34.101791   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:34.138808   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:34.602705   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:34.638229   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:35.101378   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:35.138162   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:35.602155   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:35.637319   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:36.101665   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:36.138355   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:36.600889   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:36.638593   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:37.101869   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:37.139149   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:37.601615   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:37.638777   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:38.102675   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:38.138012   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:38.600720   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:38.637882   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:39.102213   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:39.138240   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:39.603636   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:39.638242   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:40.100875   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:40.138486   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:40.601610   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:40.638925   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:41.101500   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:41.138657   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:41.603751   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:41.639353   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:42.102026   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:42.137955   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:42.565265   12353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:25:42.590951   12353 api_server.go:72] duration metric: took 2m9.165125601s to wait for apiserver process to appear ...
	I0421 18:25:42.590982   12353 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:25:42.591020   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 18:25:42.591081   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 18:25:42.601367   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:42.638608   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:42.644189   12353 cri.go:89] found id: "fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:42.644213   12353 cri.go:89] found id: ""
	I0421 18:25:42.644223   12353 logs.go:276] 1 containers: [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8]
	I0421 18:25:42.644286   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.651015   12353 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 18:25:42.651085   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 18:25:42.699231   12353 cri.go:89] found id: "dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:42.699257   12353 cri.go:89] found id: ""
	I0421 18:25:42.699266   12353 logs.go:276] 1 containers: [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead]
	I0421 18:25:42.699313   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.704853   12353 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 18:25:42.704924   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 18:25:42.747617   12353 cri.go:89] found id: "5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:42.747638   12353 cri.go:89] found id: ""
	I0421 18:25:42.747645   12353 logs.go:276] 1 containers: [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab]
	I0421 18:25:42.747688   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.752457   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 18:25:42.752515   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 18:25:42.792807   12353 cri.go:89] found id: "eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:42.792833   12353 cri.go:89] found id: ""
	I0421 18:25:42.792843   12353 logs.go:276] 1 containers: [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4]
	I0421 18:25:42.792903   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.797425   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 18:25:42.797479   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 18:25:42.839251   12353 cri.go:89] found id: "7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:42.839278   12353 cri.go:89] found id: ""
	I0421 18:25:42.839287   12353 logs.go:276] 1 containers: [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1]
	I0421 18:25:42.839349   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.844625   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 18:25:42.844686   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 18:25:42.886572   12353 cri.go:89] found id: "78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:42.886589   12353 cri.go:89] found id: ""
	I0421 18:25:42.886596   12353 logs.go:276] 1 containers: [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af]
	I0421 18:25:42.886642   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:42.892133   12353 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 18:25:42.892204   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 18:25:42.939974   12353 cri.go:89] found id: ""
	I0421 18:25:42.939998   12353 logs.go:276] 0 containers: []
	W0421 18:25:42.940005   12353 logs.go:278] No container was found matching "kindnet"
	I0421 18:25:42.940013   12353 logs.go:123] Gathering logs for etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] ...
	I0421 18:25:42.940024   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:43.007838   12353 logs.go:123] Gathering logs for coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] ...
	I0421 18:25:43.007873   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:43.051522   12353 logs.go:123] Gathering logs for dmesg ...
	I0421 18:25:43.051550   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 18:25:43.071873   12353 logs.go:123] Gathering logs for describe nodes ...
	I0421 18:25:43.071910   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0421 18:25:43.102177   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:43.139138   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:43.208753   12353 logs.go:123] Gathering logs for kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] ...
	I0421 18:25:43.208782   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:43.263934   12353 logs.go:123] Gathering logs for kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] ...
	I0421 18:25:43.263969   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:43.316732   12353 logs.go:123] Gathering logs for kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] ...
	I0421 18:25:43.316764   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:43.362398   12353 logs.go:123] Gathering logs for kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] ...
	I0421 18:25:43.362425   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:43.429062   12353 logs.go:123] Gathering logs for CRI-O ...
	I0421 18:25:43.429096   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 18:25:43.601489   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:43.637867   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:44.101793   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:44.138433   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:44.375733   12353 logs.go:123] Gathering logs for container status ...
	I0421 18:25:44.375770   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 18:25:44.439709   12353 logs.go:123] Gathering logs for kubelet ...
	I0421 18:25:44.439745   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0421 18:25:44.490405   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:44.490565   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:44.537966   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:44.537996   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:25:44.538045   12353 out.go:239] X Problems detected in kubelet:
	W0421 18:25:44.538053   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:44.538071   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:44.538083   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:44.538089   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:25:44.602452   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:44.638139   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:45.101836   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:45.137880   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:45.602110   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:45.639232   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:46.100758   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:46.138683   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:46.605301   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:46.638082   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:47.101047   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:47.137664   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:47.602183   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:47.638143   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:48.101254   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:48.138476   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:48.602088   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:48.638503   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:49.101282   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:49.137848   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:49.602914   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:49.637859   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:50.101703   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:50.138743   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:50.602138   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:50.639009   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:51.101354   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:51.138314   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:51.600538   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:51.638169   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:52.101888   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:52.137334   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:52.601688   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:52.638469   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:53.102214   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:53.137797   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:53.600996   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:53.637938   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:54.102194   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:54.138264   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:54.538433   12353 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0421 18:25:54.543321   12353 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0421 18:25:54.544406   12353 api_server.go:141] control plane version: v1.30.0
	I0421 18:25:54.544426   12353 api_server.go:131] duration metric: took 11.953437344s to wait for apiserver health ...
	I0421 18:25:54.544434   12353 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:25:54.544454   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 18:25:54.544498   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 18:25:54.588978   12353 cri.go:89] found id: "fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:54.589005   12353 cri.go:89] found id: ""
	I0421 18:25:54.589015   12353 logs.go:276] 1 containers: [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8]
	I0421 18:25:54.589068   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.594941   12353 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 18:25:54.595002   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 18:25:54.600837   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:54.638987   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:54.656136   12353 cri.go:89] found id: "dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:54.656162   12353 cri.go:89] found id: ""
	I0421 18:25:54.656172   12353 logs.go:276] 1 containers: [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead]
	I0421 18:25:54.656219   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.662030   12353 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 18:25:54.662113   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 18:25:54.706766   12353 cri.go:89] found id: "5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:54.706785   12353 cri.go:89] found id: ""
	I0421 18:25:54.706792   12353 logs.go:276] 1 containers: [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab]
	I0421 18:25:54.706842   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.711407   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 18:25:54.711470   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 18:25:54.755558   12353 cri.go:89] found id: "eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:54.755579   12353 cri.go:89] found id: ""
	I0421 18:25:54.755587   12353 logs.go:276] 1 containers: [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4]
	I0421 18:25:54.755646   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.760592   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 18:25:54.760665   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 18:25:54.814929   12353 cri.go:89] found id: "7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:54.814951   12353 cri.go:89] found id: ""
	I0421 18:25:54.814960   12353 logs.go:276] 1 containers: [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1]
	I0421 18:25:54.815010   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.820641   12353 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 18:25:54.820702   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 18:25:54.873830   12353 cri.go:89] found id: "78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:54.873857   12353 cri.go:89] found id: ""
	I0421 18:25:54.873867   12353 logs.go:276] 1 containers: [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af]
	I0421 18:25:54.873933   12353 ssh_runner.go:195] Run: which crictl
	I0421 18:25:54.879042   12353 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 18:25:54.879113   12353 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 18:25:54.924037   12353 cri.go:89] found id: ""
	I0421 18:25:54.924067   12353 logs.go:276] 0 containers: []
	W0421 18:25:54.924075   12353 logs.go:278] No container was found matching "kindnet"
	I0421 18:25:54.924083   12353 logs.go:123] Gathering logs for kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] ...
	I0421 18:25:54.924095   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8"
	I0421 18:25:54.984377   12353 logs.go:123] Gathering logs for CRI-O ...
	I0421 18:25:54.984405   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 18:25:55.102081   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:55.139140   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:55.601698   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:55.638589   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:55.795107   12353 logs.go:123] Gathering logs for dmesg ...
	I0421 18:25:55.795145   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 18:25:55.815458   12353 logs.go:123] Gathering logs for describe nodes ...
	I0421 18:25:55.815485   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0421 18:25:55.941960   12353 logs.go:123] Gathering logs for coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] ...
	I0421 18:25:55.941985   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab"
	I0421 18:25:55.993773   12353 logs.go:123] Gathering logs for kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] ...
	I0421 18:25:55.993797   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4"
	I0421 18:25:56.046574   12353 logs.go:123] Gathering logs for kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] ...
	I0421 18:25:56.046604   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1"
	I0421 18:25:56.095135   12353 logs.go:123] Gathering logs for kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] ...
	I0421 18:25:56.095164   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af"
	I0421 18:25:56.101983   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:56.138255   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:56.164648   12353 logs.go:123] Gathering logs for container status ...
	I0421 18:25:56.164680   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 18:25:56.217362   12353 logs.go:123] Gathering logs for kubelet ...
	I0421 18:25:56.217395   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0421 18:25:56.268048   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:56.268208   12353 logs.go:138] Found kubelet problem: Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:56.308920   12353 logs.go:123] Gathering logs for etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] ...
	I0421 18:25:56.308958   12353 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead"
	I0421 18:25:56.376367   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:56.376401   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:25:56.376451   12353 out.go:239] X Problems detected in kubelet:
	W0421 18:25:56.376459   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: W0421 18:23:32.972324    1271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	W0421 18:25:56.376466   12353 out.go:239]   Apr 21 18:23:32 addons-337450 kubelet[1271]: E0421 18:23:32.972522    1271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-337450" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-337450' and this object
	I0421 18:25:56.376473   12353 out.go:304] Setting ErrFile to fd 2...
	I0421 18:25:56.376478   12353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:25:56.601348   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:56.638193   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:57.100950   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:57.137856   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:57.601935   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:57.637857   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:58.102373   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:58.138235   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:58.601435   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:58.638615   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:59.101352   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:59.138410   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:25:59.600305   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:25:59.639445   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:00.101485   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:00.138791   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:00.601260   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:00.637627   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:01.101859   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:01.138820   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:01.706023   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:01.707060   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:02.101146   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:02.137784   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:02.601626   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:02.638068   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:03.102285   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:03.138487   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:03.602723   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:03.638382   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:04.102086   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:04.137787   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:04.603033   12353 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:26:04.639703   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:05.102153   12353 kapi.go:107] duration metric: took 2m22.009281538s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0421 18:26:05.138321   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:05.638080   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:06.139134   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:06.389699   12353 system_pods.go:59] 18 kube-system pods found
	I0421 18:26:06.389731   12353 system_pods.go:61] "coredns-7db6d8ff4d-zkbzm" [404bcd18-a121-4e5f-8df6-8caccd78cec0] Running
	I0421 18:26:06.389736   12353 system_pods.go:61] "csi-hostpath-attacher-0" [861f5920-82bc-4203-aca8-d4d87a7fcf8d] Running
	I0421 18:26:06.389740   12353 system_pods.go:61] "csi-hostpath-resizer-0" [999d845e-8fac-4e5b-88d6-e2606bbb46ef] Running
	I0421 18:26:06.389743   12353 system_pods.go:61] "csi-hostpathplugin-g7zc7" [8d43afcc-7206-4031-897b-e27c738195ad] Running
	I0421 18:26:06.389747   12353 system_pods.go:61] "etcd-addons-337450" [d5b644a4-db2a-419c-8757-3ffc986caf95] Running
	I0421 18:26:06.389750   12353 system_pods.go:61] "kube-apiserver-addons-337450" [28de43a5-aabc-40ec-8311-778c57b6bb55] Running
	I0421 18:26:06.389754   12353 system_pods.go:61] "kube-controller-manager-addons-337450" [35e6ad95-2f09-47df-899d-06797c770946] Running
	I0421 18:26:06.389757   12353 system_pods.go:61] "kube-ingress-dns-minikube" [ebf19058-ca7a-4a46-8ce6-71aaac949202] Running
	I0421 18:26:06.389760   12353 system_pods.go:61] "kube-proxy-n76l5" [8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b] Running
	I0421 18:26:06.389763   12353 system_pods.go:61] "kube-scheduler-addons-337450" [171aeef7-e173-4942-b3d5-24070e00a658] Running
	I0421 18:26:06.389771   12353 system_pods.go:61] "metrics-server-c59844bb4-dkrx4" [6b506806-a7ad-4fa2-95ec-c1698f2f93e4] Running
	I0421 18:26:06.389774   12353 system_pods.go:61] "nvidia-device-plugin-daemonset-hggr8" [ab89f680-78cb-478b-929f-acea30c6e4c8] Running
	I0421 18:26:06.389781   12353 system_pods.go:61] "registry-hqdlr" [5295efd0-2d0b-45a9-92f4-12ac59b9f395] Running
	I0421 18:26:06.389784   12353 system_pods.go:61] "registry-proxy-psfhr" [29887109-7168-4513-91b6-e2f7615b03d0] Running
	I0421 18:26:06.389790   12353 system_pods.go:61] "snapshot-controller-745499f584-5plq8" [ba50b3a1-01aa-496b-9a48-e448c9325502] Running
	I0421 18:26:06.389794   12353 system_pods.go:61] "snapshot-controller-745499f584-wdfhr" [36de1d83-5283-4c07-ae6c-fbc01ccfe12d] Running
	I0421 18:26:06.389800   12353 system_pods.go:61] "storage-provisioner" [3eb02dc0-5b10-429a-b88d-90341a248055] Running
	I0421 18:26:06.389804   12353 system_pods.go:61] "tiller-deploy-6677d64bcd-lrdr7" [d0119b9a-443d-45f9-adeb-fc91c36d95a9] Running
	I0421 18:26:06.389812   12353 system_pods.go:74] duration metric: took 11.845372998s to wait for pod list to return data ...
	I0421 18:26:06.389822   12353 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:26:06.393112   12353 default_sa.go:45] found service account: "default"
	I0421 18:26:06.393132   12353 default_sa.go:55] duration metric: took 3.301985ms for default service account to be created ...
	I0421 18:26:06.393140   12353 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:26:06.402775   12353 system_pods.go:86] 18 kube-system pods found
	I0421 18:26:06.402800   12353 system_pods.go:89] "coredns-7db6d8ff4d-zkbzm" [404bcd18-a121-4e5f-8df6-8caccd78cec0] Running
	I0421 18:26:06.402806   12353 system_pods.go:89] "csi-hostpath-attacher-0" [861f5920-82bc-4203-aca8-d4d87a7fcf8d] Running
	I0421 18:26:06.402812   12353 system_pods.go:89] "csi-hostpath-resizer-0" [999d845e-8fac-4e5b-88d6-e2606bbb46ef] Running
	I0421 18:26:06.402819   12353 system_pods.go:89] "csi-hostpathplugin-g7zc7" [8d43afcc-7206-4031-897b-e27c738195ad] Running
	I0421 18:26:06.402828   12353 system_pods.go:89] "etcd-addons-337450" [d5b644a4-db2a-419c-8757-3ffc986caf95] Running
	I0421 18:26:06.402837   12353 system_pods.go:89] "kube-apiserver-addons-337450" [28de43a5-aabc-40ec-8311-778c57b6bb55] Running
	I0421 18:26:06.402845   12353 system_pods.go:89] "kube-controller-manager-addons-337450" [35e6ad95-2f09-47df-899d-06797c770946] Running
	I0421 18:26:06.402855   12353 system_pods.go:89] "kube-ingress-dns-minikube" [ebf19058-ca7a-4a46-8ce6-71aaac949202] Running
	I0421 18:26:06.402864   12353 system_pods.go:89] "kube-proxy-n76l5" [8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b] Running
	I0421 18:26:06.402868   12353 system_pods.go:89] "kube-scheduler-addons-337450" [171aeef7-e173-4942-b3d5-24070e00a658] Running
	I0421 18:26:06.402872   12353 system_pods.go:89] "metrics-server-c59844bb4-dkrx4" [6b506806-a7ad-4fa2-95ec-c1698f2f93e4] Running
	I0421 18:26:06.402879   12353 system_pods.go:89] "nvidia-device-plugin-daemonset-hggr8" [ab89f680-78cb-478b-929f-acea30c6e4c8] Running
	I0421 18:26:06.402884   12353 system_pods.go:89] "registry-hqdlr" [5295efd0-2d0b-45a9-92f4-12ac59b9f395] Running
	I0421 18:26:06.402890   12353 system_pods.go:89] "registry-proxy-psfhr" [29887109-7168-4513-91b6-e2f7615b03d0] Running
	I0421 18:26:06.402894   12353 system_pods.go:89] "snapshot-controller-745499f584-5plq8" [ba50b3a1-01aa-496b-9a48-e448c9325502] Running
	I0421 18:26:06.402901   12353 system_pods.go:89] "snapshot-controller-745499f584-wdfhr" [36de1d83-5283-4c07-ae6c-fbc01ccfe12d] Running
	I0421 18:26:06.402905   12353 system_pods.go:89] "storage-provisioner" [3eb02dc0-5b10-429a-b88d-90341a248055] Running
	I0421 18:26:06.402910   12353 system_pods.go:89] "tiller-deploy-6677d64bcd-lrdr7" [d0119b9a-443d-45f9-adeb-fc91c36d95a9] Running
	I0421 18:26:06.402917   12353 system_pods.go:126] duration metric: took 9.768642ms to wait for k8s-apps to be running ...
	I0421 18:26:06.402929   12353 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:26:06.403008   12353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:26:06.421690   12353 system_svc.go:56] duration metric: took 18.752011ms WaitForService to wait for kubelet
	I0421 18:26:06.421728   12353 kubeadm.go:576] duration metric: took 2m32.995908158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:26:06.421752   12353 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:26:06.425288   12353 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:26:06.425316   12353 node_conditions.go:123] node cpu capacity is 2
	I0421 18:26:06.425327   12353 node_conditions.go:105] duration metric: took 3.571194ms to run NodePressure ...
	I0421 18:26:06.425339   12353 start.go:240] waiting for startup goroutines ...
	I0421 18:26:06.640165   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:07.137588   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:07.638770   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:08.138202   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:08.640874   12353 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:26:09.138418   12353 kapi.go:107] duration metric: took 2m22.504203249s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0421 18:26:09.140513   12353 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-337450 cluster.
	I0421 18:26:09.141962   12353 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0421 18:26:09.143269   12353 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0421 18:26:09.144552   12353 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, helm-tiller, yakd, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0421 18:26:09.146262   12353 addons.go:505] duration metric: took 2m35.720409836s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner helm-tiller yakd inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0421 18:26:09.146298   12353 start.go:245] waiting for cluster config update ...
	I0421 18:26:09.146315   12353 start.go:254] writing updated cluster config ...
	I0421 18:26:09.146535   12353 ssh_runner.go:195] Run: rm -f paused
	I0421 18:26:09.195895   12353 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 18:26:09.197832   12353 out.go:177] * Done! kubectl is now configured to use "addons-337450" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.940809960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d803d1ac-4268-4a29-9da0-fffb6fca4029 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.940895992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d803d1ac-4268-4a29-9da0-fffb6fca4029 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.941168514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17137
23877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSan
dboxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc
05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa4
5129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b307d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d803d1ac-4268-4a29-9da0-fffb6fca4029 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.941797398Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd04543a-412c-41eb-804f-6d5030de6bcc name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.942990938Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-4hk7z,Uid:68d248d5-3d1e-4c96-89c8-b2099198c47b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713724133966283460,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:28:53.655492445Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&PodSandboxMetadata{Name:headlamp-7559bf459f-h8lsl,Uid:b58d5b92-2bf6-4e12-b34b-478e60c90c28,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713724046602216575,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,pod-template-hash: 7559bf459f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:27:24.788079156Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&PodSandboxMetadata{Name:nginx,Uid:a9b06fa8-4264-4ab2-90bd-364379ca3429,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723993211307965,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
04-21T18:26:32.892338415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-czh85,Uid:e59829f4-7654-4412-889a-d63beca8a741,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723961922642935,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:23:46.548796692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-drwst,Uid:6b583820-9a1d-4846-ad22-09785b6ab382,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1713723820581008589,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:23:40.250397811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-dkrx4,Uid:6b506806-a7ad-4fa2-95ec-c1698f2f93e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723820157853479,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:23:39.753584367Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3eb02dc0-5b10-429a-b88d-90341a248055,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723819371126638,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-21T18:23:39.065050485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&PodSandboxMetadata{Name:kube-proxy-n76l5,Uid:8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723814820405853,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-
7a16-48e9-8c1c-5ae64aafc80b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:23:32.966714313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zkbzm,Uid:404bcd18-a121-4e5f-8df6-8caccd78cec0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723814108034024,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:23:33.396270656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd1ab8d4ba0849c960eaf88ccc05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-337450
,Uid:49543dace644eb13c130660897373dfb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723793927377645,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 49543dace644eb13c130660897373dfb,kubernetes.io/config.seen: 2024-04-21T18:23:13.449643690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3717dfc15b103210c35bb167a79f573b307d95ca27843ae3f4365e78c71257f2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-337450,Uid:c0a2a07965b2e1e45b8c8cb0b1020f88,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723793917534853,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0a2a07965b2e1e45b8c8cb0b1020f88,kubernetes.io/config.seen: 2024-04-21T18:23:13.449642904Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa45129f2cd2394bcb359632,Metadata:&PodSandboxMetadata{Name:etcd-addons-337450,Uid:71744b46392195f166f2a15d0d81962c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723793912404107,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.51:2379,kubernetes.io/config.hash: 71744b46392195f166f2a15d0d81962c,kubernetes.io/config.seen: 2024-04-21T18:23:13.449637114Z,k
ubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e3713418f65a8d392a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-337450,Uid:5530d61ff189911aea99fd0c90aa77d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713723793906277917,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.51:8443,kubernetes.io/config.hash: 5530d61ff189911aea99fd0c90aa77d7,kubernetes.io/config.seen: 2024-04-21T18:23:13.449641551Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dd04543a-412c-41eb-804f-6d5030de6bcc name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.944556368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9de5e114-ed58-41ca-aa0c-bb21d1f62643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.944631060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9de5e114-ed58-41ca-aa0c-bb21d1f62643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.944910737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17137
23877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSan
dboxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc
05cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa4
5129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b307d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9de5e114-ed58-41ca-aa0c-bb21d1f62643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.956009639Z" level=debug msg="Unmounted container 3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8" file="storage/runtime.go:495" id=aed5d5e5-0ba5-4b78-bbbc-571a985129e7 name=/runtime.v1.RuntimeService/StopContainer
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.974963174Z" level=debug msg="Found exit code for 3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8: 0" file="oci/runtime_oci.go:1022"
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.975311939Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:ab4a08a4 io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"ab4a08a4\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8 io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-04-21T18:24:24.116099191Z io.kubernetes.cri-o.IP.0:10.244.0.9 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a io.kubernetes.cri-o.ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62 io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-dkrx4\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6b506806-a
7ad-4fa2-95ec-c1698f2f93e4\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-dkrx4_6b506806-a7ad-4fa2-95ec-c1698f2f93e4/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/7c615894a59281b25e089ae8af083e4b5c61979c2c2b1afae34c1ee06131c6ee/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-c59844bb4-dkrx4_kube-system_6b506806-a7ad-4fa2-95ec-c1698f2f93e4_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-dkrx4_kube-system_6b506806-a7ad-4fa2-95ec-c1698f2f93e4_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOn
ce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/6b506806-a7ad-4fa2-95ec-c1698f2f93e4/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6b506806-a7ad-4fa2-95ec-c1698f2f93e4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6b506806-a7ad-4fa2-95ec-c1698f2f93e4/containers/metrics-server/ef244d79\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6b506806-a7ad-4fa2-95ec-c1698f2f93e4/volumes/kubernetes.io~projected/kube-api-access-hd5jc\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-c59844bb4-dkrx4 io.kubernetes.pod.na
mespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:6b506806-a7ad-4fa2-95ec-c1698f2f93e4 kubernetes.io/config.seen:2024-04-21T18:23:39.753584367Z kubernetes.io/config.source:api]} Created:2024-04-21 18:24:24.170854386 +0000 UTC Started:2024-04-21 18:24:24.203016452 +0000 UTC m=+80.176920729 Finished:2024-04-21 18:32:01.891455203 +0000 UTC ExitCode:0xc000ee75c0 OOMKilled:false SeccompKilled:false Error: InitPid:4752 InitStartTime:10453 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=aed5d5e5-0ba5-4b78-bbbc-571a985129e7 name=/runtime.v1.RuntimeService/StopContainer
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.979956371Z" level=info msg="Stopped container 3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8: kube-system/metrics-server-c59844bb4-dkrx4/metrics-server" file="server/container_stop.go:29" id=aed5d5e5-0ba5-4b78-bbbc-571a985129e7 name=/runtime.v1.RuntimeService/StopContainer
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.980071055Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=aed5d5e5-0ba5-4b78-bbbc-571a985129e7 name=/runtime.v1.RuntimeService/StopContainer
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.980882667Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,}" file="otel-collector/interceptors.go:62" id=7c26287b-7ecd-4c89-bc3b-ceded0201811 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.980953465Z" level=info msg="Stopping pod sandbox: 3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890" file="server/sandbox_stop.go:18" id=7c26287b-7ecd-4c89-bc3b-ceded0201811 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.981272408Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-dkrx4 Namespace:kube-system ID:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890 UID:6b506806-a7ad-4fa2-95ec-c1698f2f93e4 NetNS:/var/run/netns/b6432113-239b-48b7-a4a9-7b82ee954dd1 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod6b506806-a7ad-4fa2-95ec-c1698f2f93e4 PodAnnotations:0xc0008f6fb8}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.981611465Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-dkrx4 from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	Apr 21 18:32:01 addons-337450 crio[681]: time="2024-04-21 18:32:01.981793249Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8\"" file="server/server.go:805"
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.013828152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dc019f5-104e-4869-b30d-582e3fcfc859 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.013924472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dc019f5-104e-4869-b30d-582e3fcfc859 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.015874432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bb2d64a-1fac-44c8-8a1f-0b2acb249e07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.017581359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713724322017559169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bb2d64a-1fac-44c8-8a1f-0b2acb249e07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.018356033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92524ce6-9981-4a63-959c-f0a6f844045e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.019664254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92524ce6-9981-4a63-959c-f0a6f844045e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:32:02 addons-337450 crio[681]: time="2024-04-21 18:32:02.020016975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f6fed9d955b44c657585d3a33fb07d7eb52c94125560c26a53cef5e0ee2224a,PodSandboxId:992250a4907efc3226dcce9033d6961f6485905e2487e7b3cf17763907efcc64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713724137467979868,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-4hk7z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68d248d5-3d1e-4c96-89c8-b2099198c47b,},Annotations:map[string]string{io.kubernetes.container.hash: 70e5b980,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15897eb42c44b9182dfef4488d4ce637b27389da034a66377b768f84702bdca9,PodSandboxId:ab211461432926c5db15d1a49a5d1519ce65e7afe1494c83cf7118b67137e3cc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713724052153819479,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-h8lsl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b58d5b92-2bf6-4e12-b34b-478e60c90c28,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8b061571,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea0d75315e973d8dc52e4249479d5ff2651f07018d9edf37aa5815e9a26b9dd,PodSandboxId:7482face30761fe38825de2880d9ea8f4bbb77e25697c098d7bc493255f53d38,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713723997649613101,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: a9b06fa8-4264-4ab2-90bd-364379ca3429,},Annotations:map[string]string{io.kubernetes.container.hash: 3e3bb7cb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855,PodSandboxId:cd3b55e25992218f7889b510734162958eeafbbfd2ab5ab42bfe6bad5b058744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713723968449303591,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-czh85,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e59829f4-7654-4412-889a-d63beca8a741,},Annotations:map[string]string{io.kubernetes.container.hash: fe25901b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9152bdb83e6576a02ae494f93cafb672fa6dfdcb98993cd17c8447af32ba3921,PodSandboxId:cd701ef748b0b58c3459bc7c6c2200fb434a949533b7eba658a6a596dd154589,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17137
23877172194170,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-drwst,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6b583820-9a1d-4846-ad22-09785b6ab382,},Annotations:map[string]string{io.kubernetes.container.hash: 2f356fec,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4bf417432894f9620f522136ac2542106d5702e4789a9ca6d8865b640b5de8,PodSandboxId:3b9d139430d389bdf9e00ff67867648103808dc48cf41d7d49914b78299e7890,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1713723864116013774,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-dkrx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b506806-a7ad-4fa2-95ec-c1698f2f93e4,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a08a4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e,PodSandboxId:c9d65a814e8f215c608f95b8cc683e7506914ca932b8d32eecea5ad2875f6539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713723821362904373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb02dc0-5b10-429a-b88d-90341a248055,},Annotations:map[string]string{io.kubernetes.container.hash: 3042b58b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab,PodSandboxId:2e9831c6cc61c85cc4d73b2e5916d31b6b421ffbd2ab715c0aba81e42bf1e855,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713723817664384292,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zkbzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 404bcd18-a121-4e5f-8df6-8caccd78cec0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c2e450a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1,PodSand
boxId:e7391d84086029d708135842b9f040a131ea9b648a6eaf9dd1a7792079c63628,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713723815765116305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n76l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a15a3ec-7a16-48e9-8c1c-5ae64aafc80b,},Annotations:map[string]string{io.kubernetes.container.hash: 2df30c78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4,PodSandboxId:cd1ab8d4ba0849c960eaf88ccc0
5cf4c7b995d21622fb2df2119c19d2e084287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713723794165220507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49543dace644eb13c130660897373dfb,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead,PodSandboxId:c188722987ccb52d0a3b59f70bf6ed90d6e4affaaa45
129f2cd2394bcb359632,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713723794202839069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71744b46392195f166f2a15d0d81962c,},Annotations:map[string]string{io.kubernetes.container.hash: b0239208,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8,PodSandboxId:e3713418f65a8d392a61de776d2379075703a76e04b017706297ba25c4d93b49,Metadata:&ContainerMetadat
a{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713723794178998563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530d61ff189911aea99fd0c90aa77d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16e1b455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af,PodSandboxId:3717dfc15b103210c35bb167a79f573b307d95ca27843ae3f4365e78c71257f2,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713723794098174915,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-337450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0a2a07965b2e1e45b8c8cb0b1020f88,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92524ce6-9981-4a63-959c-f0a6f844045e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f6fed9d955b4       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   992250a4907ef       hello-world-app-86c47465fc-4hk7z
	15897eb42c44b       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   4 minutes ago       Running             headlamp                  0                   ab21146143292       headlamp-7559bf459f-h8lsl
	0ea0d75315e97       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                         5 minutes ago       Running             nginx                     0                   7482face30761       nginx
	e45a0527f2fd3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   cd3b55e259922       gcp-auth-5db96cd9b4-czh85
	9152bdb83e657       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   cd701ef748b0b       yakd-dashboard-5ddbf7d777-drwst
	3f4bf41743289       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Exited              metrics-server            0                   3b9d139430d38       metrics-server-c59844bb4-dkrx4
	e5799cfbf50ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   c9d65a814e8f2       storage-provisioner
	5311f7249669f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   2e9831c6cc61c       coredns-7db6d8ff4d-zkbzm
	7be7f865cd8c6       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        8 minutes ago       Running             kube-proxy                0                   e7391d8408602       kube-proxy-n76l5
	dcecdd0d880a4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   c188722987ccb       etcd-addons-337450
	fd969e1dcdde1       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        8 minutes ago       Running             kube-apiserver            0                   e3713418f65a8       kube-apiserver-addons-337450
	eb8bec0fec02d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        8 minutes ago       Running             kube-scheduler            0                   cd1ab8d4ba084       kube-scheduler-addons-337450
	78ac86de1b52b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        8 minutes ago       Running             kube-controller-manager   0                   3717dfc15b103       kube-controller-manager-addons-337450
	
	
	==> coredns [5311f7249669f865482ea294a5877c7ba34b5bf92b9187d77523619ba48f50ab] <==
	[INFO] 10.244.0.7:52815 - 34853 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000623985s
	[INFO] 10.244.0.7:47683 - 41282 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112816s
	[INFO] 10.244.0.7:47683 - 56959 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083333s
	[INFO] 10.244.0.7:50161 - 31101 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106992s
	[INFO] 10.244.0.7:50161 - 16767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074185s
	[INFO] 10.244.0.7:46753 - 3610 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190288s
	[INFO] 10.244.0.7:46753 - 62747 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092232s
	[INFO] 10.244.0.7:47155 - 48990 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097033s
	[INFO] 10.244.0.7:47155 - 25437 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000029976s
	[INFO] 10.244.0.7:51856 - 64564 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062481s
	[INFO] 10.244.0.7:51856 - 22838 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00002496s
	[INFO] 10.244.0.7:53123 - 2209 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064258s
	[INFO] 10.244.0.7:53123 - 19111 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057529s
	[INFO] 10.244.0.7:52319 - 9503 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000060552s
	[INFO] 10.244.0.7:52319 - 13853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068872s
	[INFO] 10.244.0.22:50788 - 1817 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000542062s
	[INFO] 10.244.0.22:44024 - 56028 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000088453s
	[INFO] 10.244.0.22:43947 - 23924 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000205108s
	[INFO] 10.244.0.22:56531 - 1441 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128694s
	[INFO] 10.244.0.22:59733 - 39783 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180424s
	[INFO] 10.244.0.22:34618 - 63315 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000439928s
	[INFO] 10.244.0.22:33992 - 49000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003531463s
	[INFO] 10.244.0.22:50067 - 6563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.004789966s
	[INFO] 10.244.0.25:38731 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00078675s
	[INFO] 10.244.0.25:56327 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001679876s
	
	
	==> describe nodes <==
	Name:               addons-337450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-337450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=addons-337450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_23_20_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-337450
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:23:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-337450
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:31:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:29:27 +0000   Sun, 21 Apr 2024 18:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:29:27 +0000   Sun, 21 Apr 2024 18:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:29:27 +0000   Sun, 21 Apr 2024 18:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:29:27 +0000   Sun, 21 Apr 2024 18:23:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    addons-337450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 56f6a6625bb5472dac6b0ad116cf083d
	  System UUID:                56f6a662-5bb5-472d-ac6b-0ad116cf083d
	  Boot ID:                    70c56614-471a-4691-904a-240bf9e45d25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-4hk7z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  gcp-auth                    gcp-auth-5db96cd9b4-czh85                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  headlamp                    headlamp-7559bf459f-h8lsl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7db6d8ff4d-zkbzm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m29s
	  kube-system                 etcd-addons-337450                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m42s
	  kube-system                 kube-apiserver-addons-337450             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-controller-manager-addons-337450    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-proxy-n76l5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-addons-337450             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-drwst          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m49s (x8 over 8m49s)  kubelet          Node addons-337450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s (x8 over 8m49s)  kubelet          Node addons-337450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s (x7 over 8m49s)  kubelet          Node addons-337450 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m43s                  kubelet          Node addons-337450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s                  kubelet          Node addons-337450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s                  kubelet          Node addons-337450 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m41s                  kubelet          Node addons-337450 status is now: NodeReady
	  Normal  RegisteredNode           8m30s                  node-controller  Node addons-337450 event: Registered Node addons-337450 in Controller
	
	
	==> dmesg <==
	[  +5.097446] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.890642] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.223425] kauditd_printk_skb: 92 callbacks suppressed
	[Apr21 18:24] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.028609] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.001309] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.843631] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.314068] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.008195] kauditd_printk_skb: 52 callbacks suppressed
	[Apr21 18:25] kauditd_printk_skb: 49 callbacks suppressed
	[ +28.608990] kauditd_printk_skb: 24 callbacks suppressed
	[ +26.836037] kauditd_printk_skb: 24 callbacks suppressed
	[Apr21 18:26] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.553523] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.851696] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.648706] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.226919] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.168718] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.088225] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.064923] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.075488] kauditd_printk_skb: 53 callbacks suppressed
	[Apr21 18:27] kauditd_printk_skb: 3 callbacks suppressed
	[ +11.576735] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.859091] kauditd_printk_skb: 24 callbacks suppressed
	[Apr21 18:28] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [dcecdd0d880a48c3e6e6077f0fa454cf5c8bc3ee2fed884538116c366ddb7ead] <==
	{"level":"info","ts":"2024-04-21T18:26:20.734066Z","caller":"traceutil/trace.go:171","msg":"trace[2027573916] linearizableReadLoop","detail":"{readStateIndex:1419; appliedIndex:1418; }","duration":"348.840082ms","start":"2024-04-21T18:26:20.38521Z","end":"2024-04-21T18:26:20.73405Z","steps":["trace[2027573916] 'read index received'  (duration: 348.729763ms)","trace[2027573916] 'applied index is now lower than readState.Index'  (duration: 109.764µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T18:26:20.734288Z","caller":"traceutil/trace.go:171","msg":"trace[310206028] transaction","detail":"{read_only:false; response_revision:1365; number_of_response:1; }","duration":"394.719167ms","start":"2024-04-21T18:26:20.33956Z","end":"2024-04-21T18:26:20.734279Z","steps":["trace[310206028] 'process raft request'  (duration: 394.414437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:20.734407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:20.339543Z","time spent":"394.799936ms","remote":"127.0.0.1:48940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1672,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1628 >> failure:<>"}
	{"level":"warn","ts":"2024-04-21T18:26:20.734786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.550665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T18:26:20.734843Z","caller":"traceutil/trace.go:171","msg":"trace[1589045702] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1365; }","duration":"349.629142ms","start":"2024-04-21T18:26:20.385206Z","end":"2024-04-21T18:26:20.734835Z","steps":["trace[1589045702] 'agreement among raft nodes before linearized reading'  (duration: 349.482943ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:20.734929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:20.385172Z","time spent":"349.747063ms","remote":"127.0.0.1:49132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	{"level":"warn","ts":"2024-04-21T18:26:20.735112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.701267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8120"}
	{"level":"info","ts":"2024-04-21T18:26:20.73516Z","caller":"traceutil/trace.go:171","msg":"trace[231272520] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1365; }","duration":"183.771048ms","start":"2024-04-21T18:26:20.551383Z","end":"2024-04-21T18:26:20.735154Z","steps":["trace[231272520] 'agreement among raft nodes before linearized reading'  (duration: 183.658907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:20.738272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.175196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-04-21T18:26:20.738341Z","caller":"traceutil/trace.go:171","msg":"trace[1755853234] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1365; }","duration":"157.115224ms","start":"2024-04-21T18:26:20.581215Z","end":"2024-04-21T18:26:20.73833Z","steps":["trace[1755853234] 'agreement among raft nodes before linearized reading'  (duration: 154.148728ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:26:35.137036Z","caller":"traceutil/trace.go:171","msg":"trace[1534935221] linearizableReadLoop","detail":"{readStateIndex:1559; appliedIndex:1558; }","duration":"303.101159ms","start":"2024-04-21T18:26:34.833917Z","end":"2024-04-21T18:26:35.137018Z","steps":["trace[1534935221] 'read index received'  (duration: 302.941352ms)","trace[1534935221] 'applied index is now lower than readState.Index'  (duration: 159.218µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T18:26:35.137271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.337088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-84df5799c-hc5t9.17c85ede0e133f63\" ","response":"range_response_count:1 size:794"}
	{"level":"info","ts":"2024-04-21T18:26:35.137334Z","caller":"traceutil/trace.go:171","msg":"trace[686504192] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-84df5799c-hc5t9.17c85ede0e133f63; range_end:; response_count:1; response_revision:1500; }","duration":"303.429976ms","start":"2024-04-21T18:26:34.833892Z","end":"2024-04-21T18:26:35.137322Z","steps":["trace[686504192] 'agreement among raft nodes before linearized reading'  (duration: 303.279689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:35.137493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:34.833879Z","time spent":"303.525957ms","remote":"127.0.0.1:48830","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":1,"response size":818,"request content":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-84df5799c-hc5t9.17c85ede0e133f63\" "}
	{"level":"info","ts":"2024-04-21T18:26:35.137338Z","caller":"traceutil/trace.go:171","msg":"trace[1061000865] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"326.553827ms","start":"2024-04-21T18:26:34.810777Z","end":"2024-04-21T18:26:35.137331Z","steps":["trace[1061000865] 'process raft request'  (duration: 326.124986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:26:35.137718Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:26:34.81076Z","time spent":"326.916653ms","remote":"127.0.0.1:49028","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-337450\" mod_revision:1380 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-337450\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-337450\" > >"}
	{"level":"warn","ts":"2024-04-21T18:26:35.137304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.252592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6042"}
	{"level":"info","ts":"2024-04-21T18:26:35.138212Z","caller":"traceutil/trace.go:171","msg":"trace[909335110] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1500; }","duration":"186.186523ms","start":"2024-04-21T18:26:34.952016Z","end":"2024-04-21T18:26:35.138203Z","steps":["trace[909335110] 'agreement among raft nodes before linearized reading'  (duration: 185.222257ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:28:03.009195Z","caller":"traceutil/trace.go:171","msg":"trace[647246773] linearizableReadLoop","detail":"{readStateIndex:1995; appliedIndex:1994; }","duration":"237.163431ms","start":"2024-04-21T18:28:02.771992Z","end":"2024-04-21T18:28:03.009155Z","steps":["trace[647246773] 'read index received'  (duration: 237.003836ms)","trace[647246773] 'applied index is now lower than readState.Index'  (duration: 159.036µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T18:28:03.009543Z","caller":"traceutil/trace.go:171","msg":"trace[984561989] transaction","detail":"{read_only:false; response_revision:1913; number_of_response:1; }","duration":"311.528435ms","start":"2024-04-21T18:28:02.698002Z","end":"2024-04-21T18:28:03.00953Z","steps":["trace[984561989] 'process raft request'  (duration: 311.033507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:28:03.009687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.623749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-04-21T18:28:03.009714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:28:02.697985Z","time spent":"311.626258ms","remote":"127.0.0.1:48936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1912 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-21T18:28:03.00974Z","caller":"traceutil/trace.go:171","msg":"trace[140255945] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1913; }","duration":"237.790266ms","start":"2024-04-21T18:28:02.771941Z","end":"2024-04-21T18:28:03.009731Z","steps":["trace[140255945] 'agreement among raft nodes before linearized reading'  (duration: 237.622002ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:28:03.009924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.405556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-21T18:28:03.009973Z","caller":"traceutil/trace.go:171","msg":"trace[282287777] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1913; }","duration":"186.484556ms","start":"2024-04-21T18:28:02.823482Z","end":"2024-04-21T18:28:03.009966Z","steps":["trace[282287777] 'agreement among raft nodes before linearized reading'  (duration: 186.422985ms)"],"step_count":1}
	
	
	==> gcp-auth [e45a0527f2fd34172002143eb48d898153300a766786f9ecddea819810366855] <==
	2024/04/21 18:26:14 Ready to write response ...
	2024/04/21 18:26:15 Ready to marshal response ...
	2024/04/21 18:26:15 Ready to write response ...
	2024/04/21 18:26:20 Ready to marshal response ...
	2024/04/21 18:26:20 Ready to write response ...
	2024/04/21 18:26:28 Ready to marshal response ...
	2024/04/21 18:26:28 Ready to write response ...
	2024/04/21 18:26:32 Ready to marshal response ...
	2024/04/21 18:26:32 Ready to write response ...
	2024/04/21 18:26:43 Ready to marshal response ...
	2024/04/21 18:26:43 Ready to write response ...
	2024/04/21 18:26:43 Ready to marshal response ...
	2024/04/21 18:26:43 Ready to write response ...
	2024/04/21 18:26:49 Ready to marshal response ...
	2024/04/21 18:26:49 Ready to write response ...
	2024/04/21 18:26:56 Ready to marshal response ...
	2024/04/21 18:26:56 Ready to write response ...
	2024/04/21 18:27:24 Ready to marshal response ...
	2024/04/21 18:27:24 Ready to write response ...
	2024/04/21 18:27:24 Ready to marshal response ...
	2024/04/21 18:27:24 Ready to write response ...
	2024/04/21 18:27:24 Ready to marshal response ...
	2024/04/21 18:27:24 Ready to write response ...
	2024/04/21 18:28:53 Ready to marshal response ...
	2024/04/21 18:28:53 Ready to write response ...
	
	
	==> kernel <==
	 18:32:02 up 9 min,  0 users,  load average: 0.23, 0.88, 0.66
	Linux addons-337450 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fd969e1dcdde13a1ec683738dd8daccd1d88b07b3b276785cde40d0d962a09d8] <==
	I0421 18:25:30.188862       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0421 18:26:28.662067       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0421 18:26:32.759690       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0421 18:26:32.944739       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.64.189"}
	I0421 18:26:38.223932       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0421 18:26:39.253580       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0421 18:26:59.680395       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0421 18:27:06.398994       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.399041       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.422514       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.422628       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.444547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.444613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.452507       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.452565       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0421 18:27:06.480191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0421 18:27:06.483544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0421 18:27:07.452774       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0421 18:27:07.484367       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0421 18:27:07.490818       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0421 18:27:12.247113       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0421 18:27:24.718994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.171.223"}
	I0421 18:28:53.781380       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.230.173"}
	E0421 18:28:56.026341       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0421 18:28:58.788693       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [78ac86de1b52b5c078e9bceb6c218a174082819581adf416940956c1ad9fe9af] <==
	W0421 18:30:01.487098       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:30:01.487158       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:30:10.797191       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:30:10.797354       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:30:31.690924       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:30:31.690966       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:30:32.431578       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:30:32.431676       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:30:44.306394       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:30:44.306556       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:30:58.559157       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:30:58.559257       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:31:12.994612       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:31:12.994786       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:31:15.065823       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:31:15.065880       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:31:18.899404       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:31:18.899629       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:31:49.100914       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:31:49.101130       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:31:49.217283       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:31:49.217391       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0421 18:31:51.808023       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0421 18:31:51.808217       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0421 18:32:00.803118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="12.789µs"
	
	
	==> kube-proxy [7be7f865cd8c60c590235092b0a715cbb1f7a40236e346367461deae36ed3dc1] <==
	I0421 18:23:36.981912       1 server_linux.go:69] "Using iptables proxy"
	I0421 18:23:37.069313       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.51"]
	I0421 18:23:37.195642       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:23:37.195739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:23:37.195757       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:23:37.199921       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:23:37.200100       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:23:37.200137       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:23:37.201402       1 config.go:192] "Starting service config controller"
	I0421 18:23:37.201519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:23:37.201539       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:23:37.201543       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:23:37.202146       1 config.go:319] "Starting node config controller"
	I0421 18:23:37.202153       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 18:23:37.302545       1 shared_informer.go:320] Caches are synced for node config
	I0421 18:23:37.302596       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:23:37.302624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eb8bec0fec02da70fa1676e482ac1573c80108abe4c474e0710b35040392b1f4] <==
	W0421 18:23:17.036246       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 18:23:17.036285       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:23:17.869765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 18:23:17.869826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 18:23:17.902635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:17.902758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:17.952783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 18:23:17.952859       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 18:23:17.957021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:17.957081       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:17.962255       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 18:23:17.962305       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:23:17.979957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 18:23:17.980006       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 18:23:18.040221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 18:23:18.040281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 18:23:18.074679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:18.074740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:18.223099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 18:23:18.223136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 18:23:18.254665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 18:23:18.254721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 18:23:18.293021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 18:23:18.293158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0421 18:23:19.726980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.936841    1271 scope.go:117] "RemoveContainer" containerID="89c9751f6703b301cd252b4dd477322bd95653d3badaa3c2df4fb1626dd13db4"
	Apr 21 18:28:59 addons-337450 kubelet[1271]: I0421 18:28:59.944198    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598d7672-fac0-49eb-9531-28ed2743003c" path="/var/lib/kubelet/pods/598d7672-fac0-49eb-9531-28ed2743003c/volumes"
	Apr 21 18:29:19 addons-337450 kubelet[1271]: E0421 18:29:19.976669    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:29:19 addons-337450 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:29:19 addons-337450 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:29:19 addons-337450 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:29:19 addons-337450 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:29:22 addons-337450 kubelet[1271]: I0421 18:29:22.902855    1271 scope.go:117] "RemoveContainer" containerID="9b676bc962d722661287583dd53dbf86bf7f708ef674ef75b3e07a4ece7671d2"
	Apr 21 18:29:22 addons-337450 kubelet[1271]: I0421 18:29:22.925332    1271 scope.go:117] "RemoveContainer" containerID="92e45bc5e2c41d699bfe359cb752d6a3d3e0aab4c5931d681dff0ebc6e407022"
	Apr 21 18:30:19 addons-337450 kubelet[1271]: E0421 18:30:19.975615    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:30:19 addons-337450 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:30:19 addons-337450 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:30:19 addons-337450 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:30:19 addons-337450 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:31:19 addons-337450 kubelet[1271]: E0421 18:31:19.978857    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:31:19 addons-337450 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:31:19 addons-337450 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:31:19 addons-337450 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:31:19 addons-337450 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:32:02 addons-337450 kubelet[1271]: I0421 18:32:02.288361    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd5jc\" (UniqueName: \"kubernetes.io/projected/6b506806-a7ad-4fa2-95ec-c1698f2f93e4-kube-api-access-hd5jc\") pod \"6b506806-a7ad-4fa2-95ec-c1698f2f93e4\" (UID: \"6b506806-a7ad-4fa2-95ec-c1698f2f93e4\") "
	Apr 21 18:32:02 addons-337450 kubelet[1271]: I0421 18:32:02.288536    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b506806-a7ad-4fa2-95ec-c1698f2f93e4-tmp-dir\") pod \"6b506806-a7ad-4fa2-95ec-c1698f2f93e4\" (UID: \"6b506806-a7ad-4fa2-95ec-c1698f2f93e4\") "
	Apr 21 18:32:02 addons-337450 kubelet[1271]: I0421 18:32:02.289834    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b506806-a7ad-4fa2-95ec-c1698f2f93e4-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6b506806-a7ad-4fa2-95ec-c1698f2f93e4" (UID: "6b506806-a7ad-4fa2-95ec-c1698f2f93e4"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 21 18:32:02 addons-337450 kubelet[1271]: I0421 18:32:02.297689    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b506806-a7ad-4fa2-95ec-c1698f2f93e4-kube-api-access-hd5jc" (OuterVolumeSpecName: "kube-api-access-hd5jc") pod "6b506806-a7ad-4fa2-95ec-c1698f2f93e4" (UID: "6b506806-a7ad-4fa2-95ec-c1698f2f93e4"). InnerVolumeSpecName "kube-api-access-hd5jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 21 18:32:02 addons-337450 kubelet[1271]: I0421 18:32:02.389867    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hd5jc\" (UniqueName: \"kubernetes.io/projected/6b506806-a7ad-4fa2-95ec-c1698f2f93e4-kube-api-access-hd5jc\") on node \"addons-337450\" DevicePath \"\""
	Apr 21 18:32:02 addons-337450 kubelet[1271]: I0421 18:32:02.389898    1271 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b506806-a7ad-4fa2-95ec-c1698f2f93e4-tmp-dir\") on node \"addons-337450\" DevicePath \"\""
	
	
	==> storage-provisioner [e5799cfbf50ab0e763afcc6e182be8e8ed03e773f28c5f8873ba7155e3b4d88e] <==
	I0421 18:23:41.760566       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 18:23:41.784000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 18:23:41.784101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 18:23:41.797856       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 18:23:41.803105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-337450_2c81037f-fda8-484b-be10-f2799d1cde06!
	I0421 18:23:41.804885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9dfcb6b5-e135-4ff0-a13c-99e06c620c2e", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-337450_2c81037f-fda8-484b-be10-f2799d1cde06 became leader
	I0421 18:23:41.903542       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-337450_2c81037f-fda8-484b-be10-f2799d1cde06!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-337450 -n addons-337450
helpers_test.go:261: (dbg) Run:  kubectl --context addons-337450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (354.24s)

x
+
TestAddons/StoppedEnableDisable (154.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-337450
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-337450: exit status 82 (2m0.477933773s)

-- stdout --
	* Stopping node "addons-337450"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-337450" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-337450
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-337450: exit status 11 (21.525758845s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-337450" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-337450
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-337450: exit status 11 (6.143442427s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-337450" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-337450
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-337450: exit status 11 (6.144178765s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-337450" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.29s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.07s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 node stop m02 -v=7 --alsologtostderr
E0421 18:46:09.207583   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:46:50.049029   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.503134222s)

-- stdout --
	* Stopping node "ha-113226-m02"  ...

-- /stdout --
** stderr ** 
	I0421 18:45:44.217471   26523 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:45:44.217738   26523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:45:44.217749   26523 out.go:304] Setting ErrFile to fd 2...
	I0421 18:45:44.217753   26523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:45:44.217988   26523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:45:44.218310   26523 mustload.go:65] Loading cluster: ha-113226
	I0421 18:45:44.218695   26523 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:45:44.218715   26523 stop.go:39] StopHost: ha-113226-m02
	I0421 18:45:44.219097   26523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:45:44.219149   26523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:45:44.236887   26523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0421 18:45:44.237422   26523 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:45:44.238125   26523 main.go:141] libmachine: Using API Version  1
	I0421 18:45:44.238198   26523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:45:44.238852   26523 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:45:44.241097   26523 out.go:177] * Stopping node "ha-113226-m02"  ...
	I0421 18:45:44.242530   26523 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 18:45:44.242572   26523 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:45:44.242827   26523 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 18:45:44.242872   26523 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:45:44.246604   26523 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:45:44.247023   26523 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:45:44.247062   26523 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:45:44.247254   26523 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:45:44.247477   26523 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:45:44.247678   26523 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:45:44.247828   26523 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:45:44.334143   26523 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 18:45:44.389773   26523 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 18:45:44.457406   26523 main.go:141] libmachine: Stopping "ha-113226-m02"...
	I0421 18:45:44.457433   26523 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:45:44.458987   26523 main.go:141] libmachine: (ha-113226-m02) Calling .Stop
	I0421 18:45:44.462530   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 0/120
	I0421 18:45:45.464447   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 1/120
	I0421 18:45:46.465698   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 2/120
	I0421 18:45:47.467109   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 3/120
	I0421 18:45:48.468741   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 4/120
	I0421 18:45:49.470701   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 5/120
	I0421 18:45:50.472529   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 6/120
	I0421 18:45:51.473922   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 7/120
	I0421 18:45:52.475108   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 8/120
	I0421 18:45:53.476393   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 9/120
	I0421 18:45:54.478655   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 10/120
	I0421 18:45:55.480518   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 11/120
	I0421 18:45:56.482045   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 12/120
	I0421 18:45:57.483858   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 13/120
	I0421 18:45:58.485518   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 14/120
	I0421 18:45:59.487173   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 15/120
	I0421 18:46:00.488388   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 16/120
	I0421 18:46:01.490688   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 17/120
	I0421 18:46:02.492551   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 18/120
	I0421 18:46:03.494043   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 19/120
	I0421 18:46:04.495966   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 20/120
	I0421 18:46:05.497305   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 21/120
	I0421 18:46:06.498738   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 22/120
	I0421 18:46:07.501206   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 23/120
	I0421 18:46:08.502637   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 24/120
	I0421 18:46:09.504528   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 25/120
	I0421 18:46:10.506990   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 26/120
	I0421 18:46:11.509051   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 27/120
	I0421 18:46:12.510554   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 28/120
	I0421 18:46:13.512866   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 29/120
	I0421 18:46:14.514539   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 30/120
	I0421 18:46:15.516161   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 31/120
	I0421 18:46:16.517573   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 32/120
	I0421 18:46:17.519099   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 33/120
	I0421 18:46:18.520314   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 34/120
	I0421 18:46:19.522476   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 35/120
	I0421 18:46:20.524939   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 36/120
	I0421 18:46:21.526523   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 37/120
	I0421 18:46:22.528508   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 38/120
	I0421 18:46:23.529723   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 39/120
	I0421 18:46:24.531820   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 40/120
	I0421 18:46:25.534036   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 41/120
	I0421 18:46:26.535338   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 42/120
	I0421 18:46:27.537070   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 43/120
	I0421 18:46:28.538360   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 44/120
	I0421 18:46:29.540299   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 45/120
	I0421 18:46:30.541586   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 46/120
	I0421 18:46:31.542939   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 47/120
	I0421 18:46:32.544191   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 48/120
	I0421 18:46:33.546283   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 49/120
	I0421 18:46:34.548094   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 50/120
	I0421 18:46:35.549602   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 51/120
	I0421 18:46:36.550932   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 52/120
	I0421 18:46:37.552622   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 53/120
	I0421 18:46:38.553997   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 54/120
	I0421 18:46:39.555709   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 55/120
	I0421 18:46:40.557089   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 56/120
	I0421 18:46:41.558375   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 57/120
	I0421 18:46:42.560562   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 58/120
	I0421 18:46:43.561889   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 59/120
	I0421 18:46:44.563815   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 60/120
	I0421 18:46:45.565198   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 61/120
	I0421 18:46:46.566723   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 62/120
	I0421 18:46:47.568144   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 63/120
	I0421 18:46:48.569599   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 64/120
	I0421 18:46:49.571269   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 65/120
	I0421 18:46:50.572480   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 66/120
	I0421 18:46:51.573762   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 67/120
	I0421 18:46:52.575116   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 68/120
	I0421 18:46:53.576439   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 69/120
	I0421 18:46:54.578520   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 70/120
	I0421 18:46:55.580074   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 71/120
	I0421 18:46:56.581602   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 72/120
	I0421 18:46:57.582855   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 73/120
	I0421 18:46:58.584170   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 74/120
	I0421 18:46:59.586107   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 75/120
	I0421 18:47:00.587884   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 76/120
	I0421 18:47:01.589762   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 77/120
	I0421 18:47:02.591723   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 78/120
	I0421 18:47:03.593105   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 79/120
	I0421 18:47:04.595459   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 80/120
	I0421 18:47:05.596820   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 81/120
	I0421 18:47:06.598321   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 82/120
	I0421 18:47:07.599662   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 83/120
	I0421 18:47:08.601247   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 84/120
	I0421 18:47:09.603120   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 85/120
	I0421 18:47:10.605045   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 86/120
	I0421 18:47:11.606652   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 87/120
	I0421 18:47:12.608545   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 88/120
	I0421 18:47:13.609804   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 89/120
	I0421 18:47:14.611696   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 90/120
	I0421 18:47:15.613043   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 91/120
	I0421 18:47:16.614385   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 92/120
	I0421 18:47:17.616586   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 93/120
	I0421 18:47:18.617896   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 94/120
	I0421 18:47:19.619874   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 95/120
	I0421 18:47:20.622130   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 96/120
	I0421 18:47:21.624307   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 97/120
	I0421 18:47:22.625513   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 98/120
	I0421 18:47:23.626907   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 99/120
	I0421 18:47:24.629116   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 100/120
	I0421 18:47:25.630830   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 101/120
	I0421 18:47:26.633181   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 102/120
	I0421 18:47:27.634815   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 103/120
	I0421 18:47:28.636798   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 104/120
	I0421 18:47:29.638874   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 105/120
	I0421 18:47:30.640522   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 106/120
	I0421 18:47:31.642052   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 107/120
	I0421 18:47:32.644002   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 108/120
	I0421 18:47:33.645395   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 109/120
	I0421 18:47:34.647642   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 110/120
	I0421 18:47:35.648975   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 111/120
	I0421 18:47:36.651069   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 112/120
	I0421 18:47:37.652814   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 113/120
	I0421 18:47:38.654358   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 114/120
	I0421 18:47:39.655765   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 115/120
	I0421 18:47:40.657094   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 116/120
	I0421 18:47:41.659417   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 117/120
	I0421 18:47:42.661171   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 118/120
	I0421 18:47:43.662871   26523 main.go:141] libmachine: (ha-113226-m02) Waiting for machine to stop 119/120
	I0421 18:47:44.663552   26523 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0421 18:47:44.663708   26523 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-113226 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (19.149793671s)

-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0421 18:47:44.723561   26946 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:47:44.723861   26946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:47:44.723872   26946 out.go:304] Setting ErrFile to fd 2...
	I0421 18:47:44.723878   26946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:47:44.724173   26946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:47:44.724387   26946 out.go:298] Setting JSON to false
	I0421 18:47:44.724415   26946 mustload.go:65] Loading cluster: ha-113226
	I0421 18:47:44.724531   26946 notify.go:220] Checking for updates...
	I0421 18:47:44.724959   26946 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:47:44.724977   26946 status.go:255] checking status of ha-113226 ...
	I0421 18:47:44.725545   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:47:44.725627   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:47:44.744078   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I0421 18:47:44.744579   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:47:44.745131   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:47:44.745155   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:47:44.745588   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:47:44.745789   26946 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:47:44.747379   26946 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:47:44.747409   26946 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:47:44.747763   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:47:44.747821   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:47:44.762895   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0421 18:47:44.763349   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:47:44.763772   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:47:44.763791   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:47:44.764171   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:47:44.764363   26946 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:47:44.767026   26946 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:47:44.767382   26946 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:47:44.767413   26946 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:47:44.767516   26946 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:47:44.767783   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:47:44.767817   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:47:44.782712   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0421 18:47:44.783183   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:47:44.783764   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:47:44.783787   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:47:44.784145   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:47:44.784356   26946 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:47:44.784553   26946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:47:44.784594   26946 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:47:44.787562   26946 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:47:44.787985   26946 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:47:44.788006   26946 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:47:44.788197   26946 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:47:44.788359   26946 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:47:44.788524   26946 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:47:44.788669   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:47:44.877225   26946 ssh_runner.go:195] Run: systemctl --version
	I0421 18:47:44.885490   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
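
The host probe above opens an SSH session to the node as user docker on port 22 with the per-machine id_rsa key (sshutil.go:53) and runs the disk-usage and kubelet checks over it. Below is a minimal Go sketch of the same kind of connection using golang.org/x/crypto/ssh; the address, user and key path are taken from the log, while the error handling, the unpinned host key and the simplified "df -h /var" command are illustrative assumptions, not minikube's actual sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address as reported by sshutil.go:53 above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Assumption: the host key is not pinned in this sketch.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.60:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("df -h /var") // simplified form of the storage probe above
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
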
	I0421 18:47:44.904193   26946 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:47:44.904222   26946 api_server.go:166] Checking apiserver status ...
	I0421 18:47:44.904268   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:44.922845   26946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:47:44.934486   26946 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:47:44.934537   26946 ssh_runner.go:195] Run: ls
	I0421 18:47:44.939765   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:47:44.947048   26946 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:47:44.947080   26946 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:47:44.947093   26946 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
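
Note how the apiserver check degrades gracefully here: the freezer-cgroup lookup fails (the warning above; likely because the guest exposes a unified cgroup v2 hierarchy with no per-controller freezer line in /proc/<pid>/cgroup), so the check falls back to probing the shared control-plane endpoint recorded in the kubeconfig, https://192.168.39.254:8443/healthz, and treats a 200 "ok" response as Running. A minimal sketch of that probe follows; skipping TLS verification is an assumption made for brevity, the real client trusts the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above (the shared control-plane address on port 8443).
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: certificate verification is skipped here for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK {
		fmt.Printf("apiserver status = Running (%d: %s)\n", resp.StatusCode, body)
	} else {
		fmt.Printf("apiserver status = Error (%d: %s)\n", resp.StatusCode, body)
	}
}
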
	I0421 18:47:44.947130   26946 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:47:44.947591   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:47:44.947621   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:47:44.962824   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0421 18:47:44.963183   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:47:44.963723   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:47:44.963745   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:47:44.964067   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:47:44.964268   26946 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:47:44.965854   26946 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:47:44.965869   26946 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:47:44.966151   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:47:44.966186   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:47:44.981354   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0421 18:47:44.981734   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:47:44.982186   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:47:44.982211   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:47:44.982544   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:47:44.982741   26946 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:47:44.985366   26946 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:47:44.985789   26946 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:47:44.985818   26946 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:47:44.985981   26946 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:47:44.986304   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:47:44.986340   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:47:45.000679   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I0421 18:47:45.001024   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:47:45.001459   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:47:45.001480   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:47:45.001790   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:47:45.001990   26946 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:47:45.002198   26946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:47:45.002224   26946 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:47:45.004681   26946 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:47:45.005119   26946 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:47:45.005151   26946 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:47:45.005278   26946 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:47:45.005449   26946 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:47:45.005599   26946 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:47:45.005707   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:03.426260   26946 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:03.426384   26946 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:03.426408   26946 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:03.426418   26946 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:03.426442   26946 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
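
The m02 probe never reaches SSH authentication: the TCP dial to 192.168.39.233:22 returns "no route to host" because the VM was just shut down by the `ha-113226 node stop m02` step recorded in the Audit table below, so the node is reported as Host:Error with Kubelet and APIServer Nonexistent. A small sketch that reproduces the same low-level check with net.DialTimeout and distinguishes an unreachable host from a refused connection; the address comes from the log, everything else is illustrative.

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	addr := "192.168.39.233:22" // ha-113226-m02, stopped by "node stop m02"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("port 22 reachable")
		return
	}
	switch {
	case errors.Is(err, syscall.EHOSTUNREACH):
		fmt.Println("no route to host - the VM is down or off the network:", err)
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("host is up but sshd is not listening:", err)
	default:
		fmt.Println("dial failed:", err)
	}
}
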
	I0421 18:48:03.426452   26946 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:03.426883   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:03.426933   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:03.443046   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33671
	I0421 18:48:03.443481   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:03.443992   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:48:03.444015   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:03.444335   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:03.444565   26946 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:03.446172   26946 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:03.446193   26946 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:03.446548   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:03.446602   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:03.462198   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0421 18:48:03.462672   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:03.463133   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:48:03.463151   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:03.463469   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:03.463690   26946 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:03.466491   26946 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:03.466935   26946 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:03.466952   26946 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:03.467100   26946 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:03.467378   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:03.467414   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:03.482612   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0421 18:48:03.483010   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:03.483440   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:48:03.483459   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:03.483749   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:03.483920   26946 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:03.484097   26946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:03.484121   26946 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:03.486794   26946 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:03.487195   26946 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:03.487217   26946 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:03.487408   26946 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:03.487590   26946 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:03.487764   26946 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:03.487869   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:03.571935   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:03.592547   26946 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:03.592575   26946 api_server.go:166] Checking apiserver status ...
	I0421 18:48:03.592609   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:03.612341   26946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:03.626685   26946 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:03.626736   26946 ssh_runner.go:195] Run: ls
	I0421 18:48:03.632075   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:03.636483   26946 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:03.636508   26946 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:03.636517   26946 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:03.636531   26946 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:03.636837   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:03.636877   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:03.652643   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0421 18:48:03.653016   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:03.653492   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:48:03.653517   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:03.653843   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:03.654039   26946 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:03.655487   26946 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:03.655536   26946 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:03.655869   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:03.655902   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:03.672182   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0421 18:48:03.672658   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:03.673110   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:48:03.673134   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:03.673455   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:03.673619   26946 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:03.676524   26946 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:03.676928   26946 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:03.676965   26946 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:03.677043   26946 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:03.677439   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:03.677484   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:03.693085   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0421 18:48:03.693482   26946 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:03.693934   26946 main.go:141] libmachine: Using API Version  1
	I0421 18:48:03.693957   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:03.694282   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:03.694489   26946 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:03.694675   26946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:03.694693   26946 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:03.697816   26946 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:03.698272   26946 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:03.698300   26946 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:03.698485   26946 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:03.698674   26946 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:03.698825   26946 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:03.698971   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:03.792828   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:03.813174   26946 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr" : exit status 3
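
The harness treats any non-zero exit from `minikube status` as a failure of this step: with one control-plane node unreachable the command exits with status 3 even though the remaining nodes report Running. A hedged sketch of how such an exit code can be captured from Go with os/exec; the command line is the one quoted above, but the surrounding error handling is illustrative rather than the actual helpers_test.go code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as quoted in the failure message above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-113226",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components reported healthy")
	case errors.As(err, &exitErr):
		// minikube signals unhealthy components with a non-zero code (3 in this run).
		fmt.Println("minikube status exited with code", exitErr.ExitCode())
	default:
		fmt.Println("failed to run minikube status:", err)
	}
}
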
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-113226 -n ha-113226
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-113226 logs -n 25: (1.520044461s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226:/home/docker/cp-test_ha-113226-m03_ha-113226.txt                       |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226 sudo cat                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226.txt                                 |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m04 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp testdata/cp-test.txt                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226:/home/docker/cp-test_ha-113226-m04_ha-113226.txt                       |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226 sudo cat                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226.txt                                 |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03:/home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m03 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-113226 node stop m02 -v=7                                                     | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:40:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:40:11.351426   22327 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:40:11.351551   22327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:40:11.351560   22327 out.go:304] Setting ErrFile to fd 2...
	I0421 18:40:11.351564   22327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:40:11.351730   22327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:40:11.352359   22327 out.go:298] Setting JSON to false
	I0421 18:40:11.353185   22327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1309,"bootTime":1713723502,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:40:11.353252   22327 start.go:139] virtualization: kvm guest
	I0421 18:40:11.355621   22327 out.go:177] * [ha-113226] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:40:11.357129   22327 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:40:11.358411   22327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:40:11.357131   22327 notify.go:220] Checking for updates...
	I0421 18:40:11.361001   22327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:40:11.362403   22327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:40:11.363762   22327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:40:11.365007   22327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:40:11.366390   22327 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:40:11.401544   22327 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 18:40:11.402902   22327 start.go:297] selected driver: kvm2
	I0421 18:40:11.402917   22327 start.go:901] validating driver "kvm2" against <nil>
	I0421 18:40:11.402936   22327 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:40:11.403588   22327 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:40:11.403667   22327 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:40:11.418878   22327 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:40:11.418949   22327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:40:11.419148   22327 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:40:11.419193   22327 cni.go:84] Creating CNI manager for ""
	I0421 18:40:11.419205   22327 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0421 18:40:11.419209   22327 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0421 18:40:11.419261   22327 start.go:340] cluster config:
	{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:40:11.419383   22327 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:40:11.422109   22327 out.go:177] * Starting "ha-113226" primary control-plane node in "ha-113226" cluster
	I0421 18:40:11.423272   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:40:11.423313   22327 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:40:11.423327   22327 cache.go:56] Caching tarball of preloaded images
	I0421 18:40:11.423409   22327 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:40:11.423421   22327 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:40:11.423718   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:40:11.423751   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json: {Name:mk8f2789a9447c7baf30689bce1ddb3bc9f26118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
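
The full cluster config dumped above is persisted as JSON at .minikube/profiles/ha-113226/config.json. For reference, a minimal sketch of reading a few fields back out of that file; the struct below mirrors only a hand-picked subset of the fields visible in the dump and is not minikube's actual config type.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Illustrative subset of the fields visible in the config dump above.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
	Nodes []struct {
		Name         string
		IP           string
		ControlPlane bool
		Worker       bool
	}
}

func main() {
	path := "/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json"
	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var cfg profileConfig
	if err := json.NewDecoder(f).Decode(&cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: Kubernetes %s on %s, %d node(s)\n",
		cfg.Name, cfg.KubernetesConfig.KubernetesVersion, cfg.Driver, len(cfg.Nodes))
}
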
	I0421 18:40:11.423891   22327 start.go:360] acquireMachinesLock for ha-113226: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:40:11.423927   22327 start.go:364] duration metric: took 20.889µs to acquireMachinesLock for "ha-113226"
	I0421 18:40:11.423947   22327 start.go:93] Provisioning new machine with config: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:40:11.424007   22327 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 18:40:11.425533   22327 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 18:40:11.425658   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:40:11.425700   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:40:11.439802   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0421 18:40:11.440237   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:40:11.440820   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:40:11.440843   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:40:11.441206   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:40:11.441387   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:11.441534   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:11.441739   22327 start.go:159] libmachine.API.Create for "ha-113226" (driver="kvm2")
	I0421 18:40:11.441771   22327 client.go:168] LocalClient.Create starting
	I0421 18:40:11.441800   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:40:11.441836   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:40:11.441853   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:40:11.441903   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:40:11.441924   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:40:11.441936   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:40:11.441952   22327 main.go:141] libmachine: Running pre-create checks...
	I0421 18:40:11.441962   22327 main.go:141] libmachine: (ha-113226) Calling .PreCreateCheck
	I0421 18:40:11.442321   22327 main.go:141] libmachine: (ha-113226) Calling .GetConfigRaw
	I0421 18:40:11.442715   22327 main.go:141] libmachine: Creating machine...
	I0421 18:40:11.442730   22327 main.go:141] libmachine: (ha-113226) Calling .Create
	I0421 18:40:11.442851   22327 main.go:141] libmachine: (ha-113226) Creating KVM machine...
	I0421 18:40:11.443954   22327 main.go:141] libmachine: (ha-113226) DBG | found existing default KVM network
	I0421 18:40:11.444608   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.444443   22350 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
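
The line above shows the driver picking the first free private /24 and deriving the gateway (.1), the DHCP client range (.2-.254) and the broadcast address (.255) from it. A small sketch of the same derivation with net/netip; the variable names are illustrative and the arithmetic assumes a /24, as in this run.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.39.0/24") // subnet chosen in the log above

	base := prefix.Masked().Addr().As4() // 192.168.39.0 for a /24
	gateway := netip.AddrFrom4([4]byte{base[0], base[1], base[2], base[3] + 1})   // .1
	clientMin := netip.AddrFrom4([4]byte{base[0], base[1], base[2], base[3] + 2}) // .2
	clientMax := netip.AddrFrom4([4]byte{base[0], base[1], base[2], 254})         // .254
	broadcast := netip.AddrFrom4([4]byte{base[0], base[1], base[2], 255})         // .255

	fmt.Println("gateway:", gateway, "clients:", clientMin, "-", clientMax, "broadcast:", broadcast)
}
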
	I0421 18:40:11.444634   22327 main.go:141] libmachine: (ha-113226) DBG | created network xml: 
	I0421 18:40:11.444652   22327 main.go:141] libmachine: (ha-113226) DBG | <network>
	I0421 18:40:11.444667   22327 main.go:141] libmachine: (ha-113226) DBG |   <name>mk-ha-113226</name>
	I0421 18:40:11.444680   22327 main.go:141] libmachine: (ha-113226) DBG |   <dns enable='no'/>
	I0421 18:40:11.444688   22327 main.go:141] libmachine: (ha-113226) DBG |   
	I0421 18:40:11.444695   22327 main.go:141] libmachine: (ha-113226) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0421 18:40:11.444702   22327 main.go:141] libmachine: (ha-113226) DBG |     <dhcp>
	I0421 18:40:11.444708   22327 main.go:141] libmachine: (ha-113226) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0421 18:40:11.444716   22327 main.go:141] libmachine: (ha-113226) DBG |     </dhcp>
	I0421 18:40:11.444728   22327 main.go:141] libmachine: (ha-113226) DBG |   </ip>
	I0421 18:40:11.444735   22327 main.go:141] libmachine: (ha-113226) DBG |   
	I0421 18:40:11.444740   22327 main.go:141] libmachine: (ha-113226) DBG | </network>
	I0421 18:40:11.444743   22327 main.go:141] libmachine: (ha-113226) DBG | 
	I0421 18:40:11.449847   22327 main.go:141] libmachine: (ha-113226) DBG | trying to create private KVM network mk-ha-113226 192.168.39.0/24...
	I0421 18:40:11.515066   22327 main.go:141] libmachine: (ha-113226) DBG | private KVM network mk-ha-113226 192.168.39.0/24 created
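
The isolated mk-ha-113226 network is created from the XML above through the libvirt API. For reference, an equivalent network could be defined and started by hand; the sketch below shells out to virsh (it assumes virsh is installed and the <network> XML above has been saved to mk-ha-113226.xml, and it is not what the kvm2 driver itself does, since the driver talks to libvirt directly).

package main

import (
	"fmt"
	"os/exec"
)

// run executes one virsh subcommand against the system libvirt daemon
// and prints its combined output.
func run(args ...string) {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ virsh %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Assumes the <network> XML printed above was saved to mk-ha-113226.xml.
	run("net-define", "mk-ha-113226.xml")
	run("net-start", "mk-ha-113226")
	run("net-dhcp-leases", "mk-ha-113226") // lists leases like the one matched earlier in the log
}
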
	I0421 18:40:11.515127   22327 main.go:141] libmachine: (ha-113226) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226 ...
	I0421 18:40:11.515158   22327 main.go:141] libmachine: (ha-113226) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:40:11.515171   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.515046   22350 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:40:11.515229   22327 main.go:141] libmachine: (ha-113226) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:40:11.742006   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.741846   22350 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa...
	I0421 18:40:11.783726   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.783582   22350 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/ha-113226.rawdisk...
	I0421 18:40:11.783761   22327 main.go:141] libmachine: (ha-113226) DBG | Writing magic tar header
	I0421 18:40:11.783772   22327 main.go:141] libmachine: (ha-113226) DBG | Writing SSH key tar header
	I0421 18:40:11.783788   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.783694   22350 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226 ...
	I0421 18:40:11.783821   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226 (perms=drwx------)
	I0421 18:40:11.783843   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226
	I0421 18:40:11.783856   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:40:11.783878   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:40:11.783899   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:40:11.783908   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:40:11.783915   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:40:11.783938   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:40:11.783947   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:40:11.783957   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:40:11.783967   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:40:11.783980   22327 main.go:141] libmachine: (ha-113226) Creating domain...
	I0421 18:40:11.783990   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:40:11.783994   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home
	I0421 18:40:11.784000   22327 main.go:141] libmachine: (ha-113226) DBG | Skipping /home - not owner
	I0421 18:40:11.784997   22327 main.go:141] libmachine: (ha-113226) define libvirt domain using xml: 
	I0421 18:40:11.785031   22327 main.go:141] libmachine: (ha-113226) <domain type='kvm'>
	I0421 18:40:11.785041   22327 main.go:141] libmachine: (ha-113226)   <name>ha-113226</name>
	I0421 18:40:11.785054   22327 main.go:141] libmachine: (ha-113226)   <memory unit='MiB'>2200</memory>
	I0421 18:40:11.785070   22327 main.go:141] libmachine: (ha-113226)   <vcpu>2</vcpu>
	I0421 18:40:11.785081   22327 main.go:141] libmachine: (ha-113226)   <features>
	I0421 18:40:11.785095   22327 main.go:141] libmachine: (ha-113226)     <acpi/>
	I0421 18:40:11.785106   22327 main.go:141] libmachine: (ha-113226)     <apic/>
	I0421 18:40:11.785129   22327 main.go:141] libmachine: (ha-113226)     <pae/>
	I0421 18:40:11.785160   22327 main.go:141] libmachine: (ha-113226)     
	I0421 18:40:11.785176   22327 main.go:141] libmachine: (ha-113226)   </features>
	I0421 18:40:11.785187   22327 main.go:141] libmachine: (ha-113226)   <cpu mode='host-passthrough'>
	I0421 18:40:11.785199   22327 main.go:141] libmachine: (ha-113226)   
	I0421 18:40:11.785211   22327 main.go:141] libmachine: (ha-113226)   </cpu>
	I0421 18:40:11.785223   22327 main.go:141] libmachine: (ha-113226)   <os>
	I0421 18:40:11.785239   22327 main.go:141] libmachine: (ha-113226)     <type>hvm</type>
	I0421 18:40:11.785253   22327 main.go:141] libmachine: (ha-113226)     <boot dev='cdrom'/>
	I0421 18:40:11.785262   22327 main.go:141] libmachine: (ha-113226)     <boot dev='hd'/>
	I0421 18:40:11.785276   22327 main.go:141] libmachine: (ha-113226)     <bootmenu enable='no'/>
	I0421 18:40:11.785287   22327 main.go:141] libmachine: (ha-113226)   </os>
	I0421 18:40:11.785301   22327 main.go:141] libmachine: (ha-113226)   <devices>
	I0421 18:40:11.785321   22327 main.go:141] libmachine: (ha-113226)     <disk type='file' device='cdrom'>
	I0421 18:40:11.785339   22327 main.go:141] libmachine: (ha-113226)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/boot2docker.iso'/>
	I0421 18:40:11.785352   22327 main.go:141] libmachine: (ha-113226)       <target dev='hdc' bus='scsi'/>
	I0421 18:40:11.785365   22327 main.go:141] libmachine: (ha-113226)       <readonly/>
	I0421 18:40:11.785373   22327 main.go:141] libmachine: (ha-113226)     </disk>
	I0421 18:40:11.785408   22327 main.go:141] libmachine: (ha-113226)     <disk type='file' device='disk'>
	I0421 18:40:11.785437   22327 main.go:141] libmachine: (ha-113226)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:40:11.785463   22327 main.go:141] libmachine: (ha-113226)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/ha-113226.rawdisk'/>
	I0421 18:40:11.785480   22327 main.go:141] libmachine: (ha-113226)       <target dev='hda' bus='virtio'/>
	I0421 18:40:11.785496   22327 main.go:141] libmachine: (ha-113226)     </disk>
	I0421 18:40:11.785516   22327 main.go:141] libmachine: (ha-113226)     <interface type='network'>
	I0421 18:40:11.785532   22327 main.go:141] libmachine: (ha-113226)       <source network='mk-ha-113226'/>
	I0421 18:40:11.785545   22327 main.go:141] libmachine: (ha-113226)       <model type='virtio'/>
	I0421 18:40:11.785557   22327 main.go:141] libmachine: (ha-113226)     </interface>
	I0421 18:40:11.785569   22327 main.go:141] libmachine: (ha-113226)     <interface type='network'>
	I0421 18:40:11.785583   22327 main.go:141] libmachine: (ha-113226)       <source network='default'/>
	I0421 18:40:11.785591   22327 main.go:141] libmachine: (ha-113226)       <model type='virtio'/>
	I0421 18:40:11.785604   22327 main.go:141] libmachine: (ha-113226)     </interface>
	I0421 18:40:11.785615   22327 main.go:141] libmachine: (ha-113226)     <serial type='pty'>
	I0421 18:40:11.785628   22327 main.go:141] libmachine: (ha-113226)       <target port='0'/>
	I0421 18:40:11.785640   22327 main.go:141] libmachine: (ha-113226)     </serial>
	I0421 18:40:11.785655   22327 main.go:141] libmachine: (ha-113226)     <console type='pty'>
	I0421 18:40:11.785670   22327 main.go:141] libmachine: (ha-113226)       <target type='serial' port='0'/>
	I0421 18:40:11.785700   22327 main.go:141] libmachine: (ha-113226)     </console>
	I0421 18:40:11.785711   22327 main.go:141] libmachine: (ha-113226)     <rng model='virtio'>
	I0421 18:40:11.785721   22327 main.go:141] libmachine: (ha-113226)       <backend model='random'>/dev/random</backend>
	I0421 18:40:11.785731   22327 main.go:141] libmachine: (ha-113226)     </rng>
	I0421 18:40:11.785740   22327 main.go:141] libmachine: (ha-113226)     
	I0421 18:40:11.785751   22327 main.go:141] libmachine: (ha-113226)     
	I0421 18:40:11.785761   22327 main.go:141] libmachine: (ha-113226)   </devices>
	I0421 18:40:11.785771   22327 main.go:141] libmachine: (ha-113226) </domain>
	I0421 18:40:11.785782   22327 main.go:141] libmachine: (ha-113226) 
	I0421 18:40:11.790191   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:b2:e7:b7 in network default
	I0421 18:40:11.790759   22327 main.go:141] libmachine: (ha-113226) Ensuring networks are active...
	I0421 18:40:11.790775   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:11.791527   22327 main.go:141] libmachine: (ha-113226) Ensuring network default is active
	I0421 18:40:11.791904   22327 main.go:141] libmachine: (ha-113226) Ensuring network mk-ha-113226 is active
	I0421 18:40:11.792401   22327 main.go:141] libmachine: (ha-113226) Getting domain xml...
	I0421 18:40:11.793172   22327 main.go:141] libmachine: (ha-113226) Creating domain...
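
Once the domain defined by the XML above is started, the driver has to poll until the guest obtains a DHCP lease on mk-ha-113226; that is what the "Waiting to get IP ... will retry after ..." loop below is doing. A hedged Go sketch of an equivalent poll using `virsh domifaddr`; the backoff values and names are illustrative, and the driver itself uses the libvirt API with its own retry helper rather than virsh.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the domain's lease table until an IPv4 address shows up,
	// roughly mirroring the retry loop in the log below.
	deadline := time.Now().Add(5 * time.Minute)
	wait := 250 * time.Millisecond

	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"domifaddr", "ha-113226", "--source", "lease").CombinedOutput()
		if err == nil && strings.Contains(string(out), "ipv4") {
			fmt.Printf("machine is up:\n%s", out)
			return
		}
		fmt.Printf("no IP yet, retrying in %v\n", wait)
		time.Sleep(wait)
		if wait < 5*time.Second {
			wait *= 2 // simple exponential backoff; the log suggests the real retry also adds jitter
		}
	}
	fmt.Println("timed out waiting for an IP address")
}
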
	I0421 18:40:12.949988   22327 main.go:141] libmachine: (ha-113226) Waiting to get IP...
	I0421 18:40:12.950927   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:12.951330   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:12.951385   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:12.951324   22350 retry.go:31] will retry after 257.738769ms: waiting for machine to come up
	I0421 18:40:13.210794   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:13.211372   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:13.211397   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:13.211345   22350 retry.go:31] will retry after 336.916795ms: waiting for machine to come up
	I0421 18:40:13.549746   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:13.550237   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:13.550264   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:13.550201   22350 retry.go:31] will retry after 322.471756ms: waiting for machine to come up
	I0421 18:40:13.874629   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:13.874924   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:13.874949   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:13.874888   22350 retry.go:31] will retry after 550.724254ms: waiting for machine to come up
	I0421 18:40:14.427502   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:14.427860   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:14.427888   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:14.427830   22350 retry.go:31] will retry after 539.109512ms: waiting for machine to come up
	I0421 18:40:14.968465   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:14.968850   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:14.968878   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:14.968802   22350 retry.go:31] will retry after 902.697901ms: waiting for machine to come up
	I0421 18:40:15.872823   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:15.873140   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:15.873165   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:15.873103   22350 retry.go:31] will retry after 1.015120461s: waiting for machine to come up
	I0421 18:40:16.889857   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:16.890283   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:16.890349   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:16.890220   22350 retry.go:31] will retry after 915.582708ms: waiting for machine to come up
	I0421 18:40:17.807314   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:17.807737   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:17.807767   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:17.807692   22350 retry.go:31] will retry after 1.649437086s: waiting for machine to come up
	I0421 18:40:19.459400   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:19.459862   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:19.459903   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:19.459840   22350 retry.go:31] will retry after 1.425571352s: waiting for machine to come up
	I0421 18:40:20.887632   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:20.888135   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:20.888163   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:20.888078   22350 retry.go:31] will retry after 2.416069759s: waiting for machine to come up
	I0421 18:40:23.306941   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:23.307438   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:23.307467   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:23.307379   22350 retry.go:31] will retry after 3.062699154s: waiting for machine to come up
	I0421 18:40:26.373602   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:26.374091   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:26.374119   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:26.374026   22350 retry.go:31] will retry after 2.866180298s: waiting for machine to come up
	I0421 18:40:29.243335   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:29.243653   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:29.243673   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:29.243627   22350 retry.go:31] will retry after 4.19991653s: waiting for machine to come up
	I0421 18:40:33.445893   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.446318   22327 main.go:141] libmachine: (ha-113226) Found IP for machine: 192.168.39.60
	I0421 18:40:33.446339   22327 main.go:141] libmachine: (ha-113226) Reserving static IP address...
	I0421 18:40:33.446352   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has current primary IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.446743   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find host DHCP lease matching {name: "ha-113226", mac: "52:54:00:3d:6a:b5", ip: "192.168.39.60"} in network mk-ha-113226
	I0421 18:40:33.518856   22327 main.go:141] libmachine: (ha-113226) Reserved static IP address: 192.168.39.60
	I0421 18:40:33.518886   22327 main.go:141] libmachine: (ha-113226) Waiting for SSH to be available...
	I0421 18:40:33.518896   22327 main.go:141] libmachine: (ha-113226) DBG | Getting to WaitForSSH function...
	I0421 18:40:33.521267   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.521649   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.521673   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.521807   22327 main.go:141] libmachine: (ha-113226) DBG | Using SSH client type: external
	I0421 18:40:33.521838   22327 main.go:141] libmachine: (ha-113226) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa (-rw-------)
	I0421 18:40:33.521881   22327 main.go:141] libmachine: (ha-113226) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:40:33.521895   22327 main.go:141] libmachine: (ha-113226) DBG | About to run SSH command:
	I0421 18:40:33.521910   22327 main.go:141] libmachine: (ha-113226) DBG | exit 0
	I0421 18:40:33.646269   22327 main.go:141] libmachine: (ha-113226) DBG | SSH cmd err, output: <nil>: 
	I0421 18:40:33.646507   22327 main.go:141] libmachine: (ha-113226) KVM machine creation complete!
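
Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines above come from minikube's retry helper while it polls libvirt for a DHCP lease on the new domain's MAC address. The following Go sketch is only illustrative of that wait-for-IP pattern; lookupLeaseIP is a hypothetical placeholder for the real libvirt lease query, and the delays/jitter are assumptions rather than minikube's exact backoff.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying libvirt's DHCP leases for the
// domain's MAC address; it is a placeholder, not a real libmachine call.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until a lease shows up or the timeout expires, growing
// the delay between attempts much like the retry intervals in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:3d:6a:b5", 2*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}
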
	I0421 18:40:33.646891   22327 main.go:141] libmachine: (ha-113226) Calling .GetConfigRaw
	I0421 18:40:33.647436   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:33.647636   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:33.647815   22327 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:40:33.647830   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:40:33.649157   22327 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:40:33.649170   22327 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:40:33.649188   22327 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:40:33.649194   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.651550   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.651994   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.652032   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.652100   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.652297   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.652451   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.652614   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.652815   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.653005   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.653017   22327 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:40:33.757953   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
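
Editor's note: once the guest answers, provisioning switches to the native Go SSH client and runs small probe commands such as exit 0 and cat /etc/os-release. Below is a minimal sketch of that run-one-command-over-SSH pattern using golang.org/x/crypto/ssh; the address, user and key path are taken from the log, but the helper itself is illustrative, not minikube's actual sshutil/ssh_runner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the node and returns the combined output of a single command,
// roughly what the "About to run SSH command" / "SSH cmd err, output" pairs do.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.60:22", "docker",
		"/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa",
		"cat /etc/os-release")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
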
	I0421 18:40:33.757979   22327 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:40:33.757990   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.760834   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.761177   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.761209   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.761318   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.761507   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.761747   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.761901   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.762083   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.762248   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.762260   22327 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:40:33.867828   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:40:33.867919   22327 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:40:33.867931   22327 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:40:33.867938   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:33.868182   22327 buildroot.go:166] provisioning hostname "ha-113226"
	I0421 18:40:33.868203   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:33.868377   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.871038   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.871474   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.871506   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.871641   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.871883   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.872039   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.872176   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.872396   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.872590   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.872606   22327 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226 && echo "ha-113226" | sudo tee /etc/hostname
	I0421 18:40:33.995180   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226
	
	I0421 18:40:33.995211   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.998164   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.998531   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.998558   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.998803   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.999019   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.999196   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.999322   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.999479   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.999655   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.999670   22327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:40:34.112364   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:40:34.112397   22327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:40:34.112421   22327 buildroot.go:174] setting up certificates
	I0421 18:40:34.112433   22327 provision.go:84] configureAuth start
	I0421 18:40:34.112444   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:34.112719   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:34.115630   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.116089   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.116116   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.116265   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.118481   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.118840   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.118888   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.118977   22327 provision.go:143] copyHostCerts
	I0421 18:40:34.119021   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:40:34.119052   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:40:34.119061   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:40:34.119135   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:40:34.119256   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:40:34.119283   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:40:34.119293   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:40:34.119330   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:40:34.119438   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:40:34.119473   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:40:34.119482   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:40:34.119517   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:40:34.119595   22327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226 san=[127.0.0.1 192.168.39.60 ha-113226 localhost minikube]
	I0421 18:40:34.256665   22327 provision.go:177] copyRemoteCerts
	I0421 18:40:34.256715   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:40:34.256734   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.259197   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.259480   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.259508   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.259721   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.259926   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.260066   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.260208   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:34.346033   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:40:34.346120   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:40:34.373930   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:40:34.374008   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0421 18:40:34.401211   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:40:34.401283   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 18:40:34.427360   22327 provision.go:87] duration metric: took 314.915519ms to configureAuth
	I0421 18:40:34.427382   22327 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:40:34.427550   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:40:34.427619   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.430611   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.430952   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.430975   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.431182   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.431378   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.431566   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.431715   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.431887   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:34.432083   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:34.432112   22327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:40:34.709099   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:40:34.709122   22327 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:40:34.709129   22327 main.go:141] libmachine: (ha-113226) Calling .GetURL
	I0421 18:40:34.710361   22327 main.go:141] libmachine: (ha-113226) DBG | Using libvirt version 6000000
	I0421 18:40:34.712785   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.713172   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.713201   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.713361   22327 main.go:141] libmachine: Docker is up and running!
	I0421 18:40:34.713377   22327 main.go:141] libmachine: Reticulating splines...
	I0421 18:40:34.713385   22327 client.go:171] duration metric: took 23.27160744s to LocalClient.Create
	I0421 18:40:34.713412   22327 start.go:167] duration metric: took 23.271674332s to libmachine.API.Create "ha-113226"
	I0421 18:40:34.713424   22327 start.go:293] postStartSetup for "ha-113226" (driver="kvm2")
	I0421 18:40:34.713453   22327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:40:34.713474   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.713712   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:40:34.713735   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.715743   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.716071   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.716099   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.716181   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.716359   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.716509   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.716666   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:34.802479   22327 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:40:34.807173   22327 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:40:34.807199   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:40:34.807274   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:40:34.807366   22327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:40:34.807385   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:40:34.807493   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:40:34.818781   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:40:34.846359   22327 start.go:296] duration metric: took 132.921107ms for postStartSetup
	I0421 18:40:34.846414   22327 main.go:141] libmachine: (ha-113226) Calling .GetConfigRaw
	I0421 18:40:34.847069   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:34.849880   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.850251   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.850292   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.850485   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:40:34.850648   22327 start.go:128] duration metric: took 23.426630557s to createHost
	I0421 18:40:34.850667   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.852770   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.853063   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.853087   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.853230   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.853402   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.853574   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.853687   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.853846   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:34.854001   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:34.854018   22327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:40:34.959823   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713724834.928312409
	
	I0421 18:40:34.959848   22327 fix.go:216] guest clock: 1713724834.928312409
	I0421 18:40:34.959857   22327 fix.go:229] Guest: 2024-04-21 18:40:34.928312409 +0000 UTC Remote: 2024-04-21 18:40:34.850658084 +0000 UTC m=+23.547812524 (delta=77.654325ms)
	I0421 18:40:34.959877   22327 fix.go:200] guest clock delta is within tolerance: 77.654325ms
	I0421 18:40:34.959882   22327 start.go:83] releasing machines lock for "ha-113226", held for 23.53594762s
	I0421 18:40:34.959901   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.960163   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:34.962613   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.963001   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.963035   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.963216   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.963693   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.963860   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.963948   22327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:40:34.963984   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.964040   22327 ssh_runner.go:195] Run: cat /version.json
	I0421 18:40:34.964075   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.966434   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.966751   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.966777   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.966796   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.966910   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.967085   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.967228   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.968009   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:34.968663   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.968692   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.968900   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.969075   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.969208   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.969383   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:35.047854   22327 ssh_runner.go:195] Run: systemctl --version
	I0421 18:40:35.070662   22327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:40:35.237644   22327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:40:35.244231   22327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:40:35.244315   22327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:40:35.263802   22327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:40:35.263822   22327 start.go:494] detecting cgroup driver to use...
	I0421 18:40:35.263887   22327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:40:35.281936   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:40:35.296300   22327 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:40:35.296369   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:40:35.310821   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:40:35.325114   22327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:40:35.441304   22327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:40:35.603777   22327 docker.go:233] disabling docker service ...
	I0421 18:40:35.603839   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:40:35.620496   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:40:35.635558   22327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:40:35.755775   22327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:40:35.879362   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:40:35.896068   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:40:35.917780   22327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:40:35.917833   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.930533   22327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:40:35.930592   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.948481   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.960461   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.972842   22327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:40:35.985323   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.997730   22327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:36.017090   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:36.029406   22327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:40:36.040683   22327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:40:36.040750   22327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:40:36.056550   22327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:40:36.067473   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:40:36.191966   22327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:40:36.340108   22327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:40:36.340175   22327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:40:36.345207   22327 start.go:562] Will wait 60s for crictl version
	I0421 18:40:36.345251   22327 ssh_runner.go:195] Run: which crictl
	I0421 18:40:36.349655   22327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:40:36.392904   22327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
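
Editor's note: after restarting CRI-O, minikube waits up to 60s for the runtime socket to appear and then queries the runtime version with crictl. A rough standalone Go equivalent of that wait-then-verify step is shown below; the 60s budget and paths come from the log, while the 500ms polling interval is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForPath polls until the file exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // polling interval is illustrative
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent of "sudo /usr/bin/crictl version" in the log.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	fmt.Printf("crictl version (err=%v):\n%s", err, out)
}
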
	I0421 18:40:36.392988   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:40:36.426280   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:40:36.459781   22327 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:40:36.461153   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:36.463537   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:36.463894   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:36.463918   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:36.464086   22327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:40:36.468766   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:40:36.483564   22327 kubeadm.go:877] updating cluster {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:40:36.483668   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:40:36.483725   22327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:40:36.519121   22327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 18:40:36.519178   22327 ssh_runner.go:195] Run: which lz4
	I0421 18:40:36.523385   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0421 18:40:36.523488   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 18:40:36.527983   22327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 18:40:36.528012   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 18:40:38.164489   22327 crio.go:462] duration metric: took 1.641039281s to copy over tarball
	I0421 18:40:38.164556   22327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 18:40:40.683506   22327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.518924049s)
	I0421 18:40:40.683530   22327 crio.go:469] duration metric: took 2.519017711s to extract the tarball
	I0421 18:40:40.683537   22327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 18:40:40.723140   22327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:40:40.770654   22327 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:40:40.770677   22327 cache_images.go:84] Images are preloaded, skipping loading
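
Editor's note: the preload step copies a tarball of roughly 394 MB to the guest, extracts it under /var with tar -I lz4, removes it, and reports each phase as a "duration metric". The sketch below mirrors just the extract-and-time part with os/exec, run directly on the guest rather than over the ssh_runner, and is illustrative only (it needs the same privileges the log's sudo commands have).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same extraction command as in the log, minus the ssh_runner indirection.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	fmt.Printf("duration metric: took %v to extract the tarball\n", time.Since(start))
	// Clean up the tarball afterwards, as the log does (may also need privileges).
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		fmt.Println("rm failed:", err)
	}
}
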
	I0421 18:40:40.770685   22327 kubeadm.go:928] updating node { 192.168.39.60 8443 v1.30.0 crio true true} ...
	I0421 18:40:40.770798   22327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:40:40.770868   22327 ssh_runner.go:195] Run: crio config
	I0421 18:40:40.816746   22327 cni.go:84] Creating CNI manager for ""
	I0421 18:40:40.816768   22327 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 18:40:40.816781   22327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:40:40.816815   22327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-113226 NodeName:ha-113226 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:40:40.816983   22327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-113226"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 18:40:40.817009   22327 kube-vip.go:111] generating kube-vip config ...
	I0421 18:40:40.817063   22327 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:40:40.837871   22327 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:40:40.837989   22327 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
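
Editor's note: the manifest above is written to /etc/kubernetes/manifests so the kubelet runs kube-vip as a static pod; leader election (vip_leaderelection) decides which control plane answers for the HA VIP 192.168.39.254, and lb_enable load-balances API traffic on port 8443. As a small illustrative check, the snippet below pulls the VIP address back out of such a manifest with gopkg.in/yaml.v3; the struct shapes are just enough for this lookup and the embedded YAML is trimmed.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type envVar struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

type manifest struct {
	Spec struct {
		Containers []struct {
			Name string   `yaml:"name"`
			Env  []envVar `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

const kubeVipYAML = `
spec:
  containers:
  - name: kube-vip
    env:
    - name: cp_enable
      value: "true"
    - name: address
      value: 192.168.39.254
`

func main() {
	var m manifest
	if err := yaml.Unmarshal([]byte(kubeVipYAML), &m); err != nil {
		panic(err)
	}
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Println("control-plane VIP:", e.Value) // 192.168.39.254
			}
		}
	}
}
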
	I0421 18:40:40.838043   22327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:40:40.849398   22327 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:40:40.849449   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0421 18:40:40.860358   22327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0421 18:40:40.879454   22327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:40:40.898164   22327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0421 18:40:40.916645   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0421 18:40:40.935772   22327 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:40:40.940419   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:40:40.954779   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:40:41.095630   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:40:41.115505   22327 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.60
	I0421 18:40:41.115530   22327 certs.go:194] generating shared ca certs ...
	I0421 18:40:41.115553   22327 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.115730   22327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:40:41.115791   22327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:40:41.115806   22327 certs.go:256] generating profile certs ...
	I0421 18:40:41.115871   22327 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:40:41.115890   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt with IP's: []
	I0421 18:40:41.337876   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt ...
	I0421 18:40:41.337910   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt: {Name:mk07cf03864a7605e553f54f506054e82d530dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.338086   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key ...
	I0421 18:40:41.338102   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key: {Name:mk51046988dfae73dafd5e2bb52db757d2195cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.338190   22327 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d
	I0421 18:40:41.338205   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.254]
	I0421 18:40:41.589025   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d ...
	I0421 18:40:41.589052   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d: {Name:mk407e3447bdc028cf5399a781093ec5b8197618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.589201   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d ...
	I0421 18:40:41.589213   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d: {Name:mk1ad33bf18c891f5bde4dd54410f94c60feaea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.589280   22327 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:40:41.589353   22327 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:40:41.589407   22327 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:40:41.589421   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt with IP's: []
	I0421 18:40:41.688207   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt ...
	I0421 18:40:41.688237   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt: {Name:mk383a6d0d511a7d91ac43bbafb15d715b1c50e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.688398   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key ...
	I0421 18:40:41.688411   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key: {Name:mkcbdf233bd19e5502b42d9eb3ef410542c029bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.688496   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:40:41.688513   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:40:41.688523   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:40:41.688536   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:40:41.688546   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:40:41.688559   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:40:41.688572   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:40:41.688584   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:40:41.688629   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:40:41.688670   22327 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:40:41.688679   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:40:41.688703   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:40:41.688730   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:40:41.688754   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:40:41.688793   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:40:41.688817   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:40:41.688830   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:41.688847   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:40:41.689399   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:40:41.727238   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:40:41.758482   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:40:41.788689   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:40:41.820960   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 18:40:41.849361   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:40:41.880977   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:40:41.920407   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:40:41.958263   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:40:41.985860   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:40:42.015768   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:40:42.042644   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:40:42.061470   22327 ssh_runner.go:195] Run: openssl version
	I0421 18:40:42.068277   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:40:42.082172   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:40:42.087312   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:40:42.087355   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:40:42.093988   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:40:42.108162   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:40:42.122565   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:42.128007   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:42.128050   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:42.134713   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:40:42.149732   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:40:42.162537   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:40:42.167772   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:40:42.167840   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:40:42.174259   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
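To unpack the three openssl/ln pairs above: each CA is installed under /etc/ssl/certs using its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is the naming scheme the system trust store uses to look certificates up. A minimal Go sketch of the same two steps, illustrative only and not minikube's implementation (the certificate path is simply the example seen in the log):

    // Link a CA certificate into /etc/ssl/certs under its OpenSSL subject hash,
    // mirroring the "openssl x509 -hash" + "ln -fs" commands logged above.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link
        if err := os.Symlink(certPath, link); err != nil {
            log.Fatal(err)
        }
    }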
	I0421 18:40:42.186991   22327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:40:42.192079   22327 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:40:42.192129   22327 kubeadm.go:391] StartCluster: {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:def
ault APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:40:42.192226   22327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 18:40:42.192291   22327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 18:40:42.243493   22327 cri.go:89] found id: ""
	I0421 18:40:42.243561   22327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 18:40:42.256888   22327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 18:40:42.269446   22327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 18:40:42.282243   22327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 18:40:42.282270   22327 kubeadm.go:156] found existing configuration files:
	
	I0421 18:40:42.282315   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 18:40:42.293790   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 18:40:42.293859   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 18:40:42.305322   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 18:40:42.316693   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 18:40:42.316759   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 18:40:42.330988   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 18:40:42.347215   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 18:40:42.347282   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 18:40:42.358764   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 18:40:42.369364   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 18:40:42.369411   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 18:40:42.379858   22327 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 18:40:42.487728   22327 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 18:40:42.487787   22327 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 18:40:42.622420   22327 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 18:40:42.622579   22327 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 18:40:42.622724   22327 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 18:40:42.882186   22327 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 18:40:43.082365   22327 out.go:204]   - Generating certificates and keys ...
	I0421 18:40:43.082504   22327 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 18:40:43.082582   22327 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 18:40:43.082659   22327 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 18:40:43.169123   22327 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 18:40:43.301953   22327 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 18:40:43.522237   22327 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 18:40:43.699612   22327 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 18:40:43.699764   22327 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-113226 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0421 18:40:43.835634   22327 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 18:40:43.835906   22327 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-113226 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0421 18:40:44.083423   22327 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 18:40:44.550387   22327 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 18:40:44.617550   22327 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 18:40:44.618359   22327 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 18:40:44.849445   22327 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 18:40:44.989893   22327 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 18:40:45.168919   22327 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 18:40:45.273209   22327 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 18:40:45.340972   22327 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 18:40:45.341671   22327 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 18:40:45.345091   22327 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 18:40:45.347040   22327 out.go:204]   - Booting up control plane ...
	I0421 18:40:45.347156   22327 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 18:40:45.347244   22327 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 18:40:45.348109   22327 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 18:40:45.369732   22327 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 18:40:45.370680   22327 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 18:40:45.370728   22327 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 18:40:45.503998   22327 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 18:40:45.504097   22327 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 18:40:46.004989   22327 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.400498ms
	I0421 18:40:46.005114   22327 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 18:40:55.130632   22327 kubeadm.go:309] [api-check] The API server is healthy after 9.129088309s
	I0421 18:40:55.142896   22327 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 18:40:55.157751   22327 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 18:40:55.193655   22327 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 18:40:55.193916   22327 kubeadm.go:309] [mark-control-plane] Marking the node ha-113226 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 18:40:55.205791   22327 kubeadm.go:309] [bootstrap-token] Using token: or0ghb.3tvn35rv8gqgy7dn
	I0421 18:40:55.207314   22327 out.go:204]   - Configuring RBAC rules ...
	I0421 18:40:55.207419   22327 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 18:40:55.218747   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 18:40:55.226602   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 18:40:55.232186   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 18:40:55.236480   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 18:40:55.240116   22327 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 18:40:55.537481   22327 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 18:40:55.979802   22327 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 18:40:56.537014   22327 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 18:40:56.538339   22327 kubeadm.go:309] 
	I0421 18:40:56.538393   22327 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 18:40:56.538398   22327 kubeadm.go:309] 
	I0421 18:40:56.538467   22327 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 18:40:56.538474   22327 kubeadm.go:309] 
	I0421 18:40:56.538522   22327 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 18:40:56.538593   22327 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 18:40:56.538671   22327 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 18:40:56.538714   22327 kubeadm.go:309] 
	I0421 18:40:56.538796   22327 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 18:40:56.538806   22327 kubeadm.go:309] 
	I0421 18:40:56.538872   22327 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 18:40:56.538881   22327 kubeadm.go:309] 
	I0421 18:40:56.538945   22327 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 18:40:56.539033   22327 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 18:40:56.539115   22327 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 18:40:56.539125   22327 kubeadm.go:309] 
	I0421 18:40:56.539224   22327 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 18:40:56.539310   22327 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 18:40:56.539322   22327 kubeadm.go:309] 
	I0421 18:40:56.539438   22327 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token or0ghb.3tvn35rv8gqgy7dn \
	I0421 18:40:56.539552   22327 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 18:40:56.539573   22327 kubeadm.go:309] 	--control-plane 
	I0421 18:40:56.539577   22327 kubeadm.go:309] 
	I0421 18:40:56.539693   22327 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 18:40:56.539710   22327 kubeadm.go:309] 
	I0421 18:40:56.539822   22327 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token or0ghb.3tvn35rv8gqgy7dn \
	I0421 18:40:56.539984   22327 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 18:40:56.540606   22327 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
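As a reference for the join commands above: the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA certificate's public key (its Subject Public Key Info). A short Go sketch that reproduces the value from a PEM-encoded CA file, illustrative only (the path is the one the certs were copied to earlier in this log):

    // Print the kubeadm discovery hash ("sha256:<hex>") for a CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }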
	I0421 18:40:56.540748   22327 cni.go:84] Creating CNI manager for ""
	I0421 18:40:56.540766   22327 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 18:40:56.542657   22327 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 18:40:56.544041   22327 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 18:40:56.551639   22327 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 18:40:56.551659   22327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 18:40:56.573515   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 18:40:56.929673   22327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 18:40:56.929752   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:56.929796   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-113226 minikube.k8s.io/updated_at=2024_04_21T18_40_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-113226 minikube.k8s.io/primary=true
	I0421 18:40:57.114007   22327 ops.go:34] apiserver oom_adj: -16
	I0421 18:40:57.114073   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:57.615055   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:58.114425   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:58.615043   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:59.114992   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:59.614237   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:00.114769   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:00.614210   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:01.115103   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:01.615035   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:02.115062   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:02.614249   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:03.114731   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:03.614975   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:04.115073   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:04.615064   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:05.114171   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:05.614177   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:06.114959   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:06.342460   22327 kubeadm.go:1107] duration metric: took 9.41276071s to wait for elevateKubeSystemPrivileges
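For context: the run of identical "kubectl get sa default" commands above is a readiness poll, re-issued roughly every 500ms until the default ServiceAccount exists; the 9.4s elevateKubeSystemPrivileges duration reported above is the time spent in that wait. A generic sketch of the pattern, illustrative only and not the minikube source:

    // Poll until "kubectl get sa default" succeeds or a deadline passes.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                log.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default ServiceAccount")
    }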
	W0421 18:41:06.342505   22327 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 18:41:06.342515   22327 kubeadm.go:393] duration metric: took 24.150389266s to StartCluster
	I0421 18:41:06.342535   22327 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:06.342624   22327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:41:06.343620   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:06.343906   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 18:41:06.343925   22327 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 18:41:06.343997   22327 addons.go:69] Setting storage-provisioner=true in profile "ha-113226"
	I0421 18:41:06.343897   22327 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:41:06.344018   22327 start.go:240] waiting for startup goroutines ...
	I0421 18:41:06.344027   22327 addons.go:234] Setting addon storage-provisioner=true in "ha-113226"
	I0421 18:41:06.344035   22327 addons.go:69] Setting default-storageclass=true in profile "ha-113226"
	I0421 18:41:06.344055   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:41:06.344066   22327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-113226"
	I0421 18:41:06.344545   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:06.345126   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.345187   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.345307   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.345351   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.360730   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0421 18:41:06.361187   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.361613   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.361626   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.361917   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.362476   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.362515   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.365391   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0421 18:41:06.365768   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.366271   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.366298   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.366662   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.366853   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:06.369349   22327 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:41:06.369682   22327 kapi.go:59] client config for ha-113226: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 18:41:06.370211   22327 cert_rotation.go:137] Starting client certificate rotation controller
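The kapi.go line above dumps the client-go rest.Config built for this profile: it targets the HA virtual IP on port 8443 and authenticates with the profile's client certificate and key, trusting the minikube CA. A minimal sketch of an equivalent configuration, illustrative only (paths copied from the log line above):

    // Build a Kubernetes clientset against the same endpoint and credentials.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.39.254:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key",
                CAFile:   "/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        _ = clientset // ready for API calls against the cluster
    }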
	I0421 18:41:06.370432   22327 addons.go:234] Setting addon default-storageclass=true in "ha-113226"
	I0421 18:41:06.370476   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:41:06.370851   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.370914   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.377839   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0421 18:41:06.378278   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.378822   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.378856   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.379170   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.379330   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:06.380860   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:41:06.382697   22327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:41:06.384181   22327 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:41:06.384200   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 18:41:06.384218   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:41:06.385525   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0421 18:41:06.385863   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.386397   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.386415   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.386722   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.386904   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.387250   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:41:06.387269   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.387427   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.387452   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.387512   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:41:06.387634   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:41:06.387780   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:41:06.387898   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:41:06.407408   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0421 18:41:06.407764   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.408281   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.408304   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.408597   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.408839   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:06.410216   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:41:06.410444   22327 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 18:41:06.410461   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 18:41:06.410478   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:41:06.412663   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.413119   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:41:06.413142   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.413276   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:41:06.413423   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:41:06.413545   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:41:06.413723   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:41:06.526208   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 18:41:06.557558   22327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:41:06.568257   22327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 18:41:07.241782   22327 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
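For context: the sed pipeline run at 18:41:06.526 rewrites the coredns ConfigMap so the Corefile gains a log directive and a hosts block resolving host.minikube.internal to the host-side gateway, which is what the "host record injected" message above confirms. The injected fragment is equivalent to:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }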
	I0421 18:41:07.399295   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399317   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399375   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399392   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399587   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.399599   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.399619   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399631   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399732   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.399747   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.399733   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.399758   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399765   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399900   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.399929   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.399936   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.400041   22327 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0421 18:41:07.400048   22327 round_trippers.go:469] Request Headers:
	I0421 18:41:07.400058   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:41:07.400064   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:41:07.400111   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.400123   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.400133   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.409982   22327 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 18:41:07.410560   22327 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0421 18:41:07.410575   22327 round_trippers.go:469] Request Headers:
	I0421 18:41:07.410583   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:41:07.410588   22327 round_trippers.go:473]     Content-Type: application/json
	I0421 18:41:07.410591   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:41:07.417063   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:41:07.417250   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.417269   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.417553   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.417642   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.417659   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.419596   22327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 18:41:07.420983   22327 addons.go:505] duration metric: took 1.077061038s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0421 18:41:07.421022   22327 start.go:245] waiting for cluster config update ...
	I0421 18:41:07.421037   22327 start.go:254] writing updated cluster config ...
	I0421 18:41:07.422926   22327 out.go:177] 
	I0421 18:41:07.424487   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:07.424586   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:41:07.426306   22327 out.go:177] * Starting "ha-113226-m02" control-plane node in "ha-113226" cluster
	I0421 18:41:07.427509   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:41:07.427536   22327 cache.go:56] Caching tarball of preloaded images
	I0421 18:41:07.427641   22327 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:41:07.427655   22327 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:41:07.427754   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:41:07.427966   22327 start.go:360] acquireMachinesLock for ha-113226-m02: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:41:07.428021   22327 start.go:364] duration metric: took 29µs to acquireMachinesLock for "ha-113226-m02"
	I0421 18:41:07.428046   22327 start.go:93] Provisioning new machine with config: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:h
a-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:41:07.428143   22327 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0421 18:41:07.429960   22327 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 18:41:07.430052   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:07.430093   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:07.444971   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0421 18:41:07.445376   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:07.445783   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:07.445804   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:07.446115   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:07.446274   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:07.446405   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:07.446638   22327 start.go:159] libmachine.API.Create for "ha-113226" (driver="kvm2")
	I0421 18:41:07.446670   22327 client.go:168] LocalClient.Create starting
	I0421 18:41:07.446706   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:41:07.446745   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:41:07.446772   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:41:07.446840   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:41:07.446864   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:41:07.446881   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:41:07.446907   22327 main.go:141] libmachine: Running pre-create checks...
	I0421 18:41:07.446918   22327 main.go:141] libmachine: (ha-113226-m02) Calling .PreCreateCheck
	I0421 18:41:07.447106   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetConfigRaw
	I0421 18:41:07.447479   22327 main.go:141] libmachine: Creating machine...
	I0421 18:41:07.447500   22327 main.go:141] libmachine: (ha-113226-m02) Calling .Create
	I0421 18:41:07.447620   22327 main.go:141] libmachine: (ha-113226-m02) Creating KVM machine...
	I0421 18:41:07.449039   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found existing default KVM network
	I0421 18:41:07.449168   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found existing private KVM network mk-ha-113226
	I0421 18:41:07.449344   22327 main.go:141] libmachine: (ha-113226-m02) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02 ...
	I0421 18:41:07.449372   22327 main.go:141] libmachine: (ha-113226-m02) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:41:07.449388   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:07.449312   22722 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:41:07.449488   22327 main.go:141] libmachine: (ha-113226-m02) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:41:07.677469   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:07.677361   22722 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa...
	I0421 18:41:08.031907   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:08.031742   22722 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/ha-113226-m02.rawdisk...
	I0421 18:41:08.031954   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Writing magic tar header
	I0421 18:41:08.031981   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Writing SSH key tar header
	I0421 18:41:08.032043   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:08.031970   22722 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02 ...
	I0421 18:41:08.032190   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02
	I0421 18:41:08.032428   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:41:08.032455   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:41:08.032470   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02 (perms=drwx------)
	I0421 18:41:08.032484   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:41:08.032498   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:41:08.032507   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:41:08.032521   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home
	I0421 18:41:08.032536   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Skipping /home - not owner
	I0421 18:41:08.032547   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:41:08.032565   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:41:08.032579   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:41:08.032598   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:41:08.032612   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
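
The steps above assemble the machine directory for the new node: the driver reuses the existing KVM networks, copies the boot ISO into the machine folder, writes an SSH key pair to machines/ha-113226-m02/id_rsa, lays down the raw disk image with the key packed behind a small tar header, and tightens permissions up the directory tree. Below is a minimal sketch of generating such a key pair in Go; it assumes RSA keys and illustrative paths and is not minikube's actual helper.

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // writeKeyPair writes an RSA private key in PEM form and the matching
    // OpenSSH public key next to it (path and path+".pub").
    func writeKeyPair(path string) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	// id_rsa must be 0600, matching the (-rw-------) shown later in the log.
    	if err := os.WriteFile(path, privPEM, 0o600); err != nil {
    		return err
    	}
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
    }
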
	I0421 18:41:08.032626   22327 main.go:141] libmachine: (ha-113226-m02) Creating domain...
	I0421 18:41:08.033576   22327 main.go:141] libmachine: (ha-113226-m02) define libvirt domain using xml: 
	I0421 18:41:08.033594   22327 main.go:141] libmachine: (ha-113226-m02) <domain type='kvm'>
	I0421 18:41:08.033601   22327 main.go:141] libmachine: (ha-113226-m02)   <name>ha-113226-m02</name>
	I0421 18:41:08.033607   22327 main.go:141] libmachine: (ha-113226-m02)   <memory unit='MiB'>2200</memory>
	I0421 18:41:08.033612   22327 main.go:141] libmachine: (ha-113226-m02)   <vcpu>2</vcpu>
	I0421 18:41:08.033617   22327 main.go:141] libmachine: (ha-113226-m02)   <features>
	I0421 18:41:08.033622   22327 main.go:141] libmachine: (ha-113226-m02)     <acpi/>
	I0421 18:41:08.033627   22327 main.go:141] libmachine: (ha-113226-m02)     <apic/>
	I0421 18:41:08.033632   22327 main.go:141] libmachine: (ha-113226-m02)     <pae/>
	I0421 18:41:08.033638   22327 main.go:141] libmachine: (ha-113226-m02)     
	I0421 18:41:08.033642   22327 main.go:141] libmachine: (ha-113226-m02)   </features>
	I0421 18:41:08.033647   22327 main.go:141] libmachine: (ha-113226-m02)   <cpu mode='host-passthrough'>
	I0421 18:41:08.033652   22327 main.go:141] libmachine: (ha-113226-m02)   
	I0421 18:41:08.033663   22327 main.go:141] libmachine: (ha-113226-m02)   </cpu>
	I0421 18:41:08.033669   22327 main.go:141] libmachine: (ha-113226-m02)   <os>
	I0421 18:41:08.033672   22327 main.go:141] libmachine: (ha-113226-m02)     <type>hvm</type>
	I0421 18:41:08.033678   22327 main.go:141] libmachine: (ha-113226-m02)     <boot dev='cdrom'/>
	I0421 18:41:08.033683   22327 main.go:141] libmachine: (ha-113226-m02)     <boot dev='hd'/>
	I0421 18:41:08.033689   22327 main.go:141] libmachine: (ha-113226-m02)     <bootmenu enable='no'/>
	I0421 18:41:08.033694   22327 main.go:141] libmachine: (ha-113226-m02)   </os>
	I0421 18:41:08.033699   22327 main.go:141] libmachine: (ha-113226-m02)   <devices>
	I0421 18:41:08.033707   22327 main.go:141] libmachine: (ha-113226-m02)     <disk type='file' device='cdrom'>
	I0421 18:41:08.033721   22327 main.go:141] libmachine: (ha-113226-m02)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/boot2docker.iso'/>
	I0421 18:41:08.033733   22327 main.go:141] libmachine: (ha-113226-m02)       <target dev='hdc' bus='scsi'/>
	I0421 18:41:08.033754   22327 main.go:141] libmachine: (ha-113226-m02)       <readonly/>
	I0421 18:41:08.033772   22327 main.go:141] libmachine: (ha-113226-m02)     </disk>
	I0421 18:41:08.033783   22327 main.go:141] libmachine: (ha-113226-m02)     <disk type='file' device='disk'>
	I0421 18:41:08.033793   22327 main.go:141] libmachine: (ha-113226-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:41:08.033807   22327 main.go:141] libmachine: (ha-113226-m02)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/ha-113226-m02.rawdisk'/>
	I0421 18:41:08.033814   22327 main.go:141] libmachine: (ha-113226-m02)       <target dev='hda' bus='virtio'/>
	I0421 18:41:08.033820   22327 main.go:141] libmachine: (ha-113226-m02)     </disk>
	I0421 18:41:08.033828   22327 main.go:141] libmachine: (ha-113226-m02)     <interface type='network'>
	I0421 18:41:08.033833   22327 main.go:141] libmachine: (ha-113226-m02)       <source network='mk-ha-113226'/>
	I0421 18:41:08.033838   22327 main.go:141] libmachine: (ha-113226-m02)       <model type='virtio'/>
	I0421 18:41:08.033846   22327 main.go:141] libmachine: (ha-113226-m02)     </interface>
	I0421 18:41:08.033850   22327 main.go:141] libmachine: (ha-113226-m02)     <interface type='network'>
	I0421 18:41:08.033879   22327 main.go:141] libmachine: (ha-113226-m02)       <source network='default'/>
	I0421 18:41:08.033903   22327 main.go:141] libmachine: (ha-113226-m02)       <model type='virtio'/>
	I0421 18:41:08.033917   22327 main.go:141] libmachine: (ha-113226-m02)     </interface>
	I0421 18:41:08.033929   22327 main.go:141] libmachine: (ha-113226-m02)     <serial type='pty'>
	I0421 18:41:08.033942   22327 main.go:141] libmachine: (ha-113226-m02)       <target port='0'/>
	I0421 18:41:08.033953   22327 main.go:141] libmachine: (ha-113226-m02)     </serial>
	I0421 18:41:08.033962   22327 main.go:141] libmachine: (ha-113226-m02)     <console type='pty'>
	I0421 18:41:08.033973   22327 main.go:141] libmachine: (ha-113226-m02)       <target type='serial' port='0'/>
	I0421 18:41:08.033982   22327 main.go:141] libmachine: (ha-113226-m02)     </console>
	I0421 18:41:08.033989   22327 main.go:141] libmachine: (ha-113226-m02)     <rng model='virtio'>
	I0421 18:41:08.034004   22327 main.go:141] libmachine: (ha-113226-m02)       <backend model='random'>/dev/random</backend>
	I0421 18:41:08.034017   22327 main.go:141] libmachine: (ha-113226-m02)     </rng>
	I0421 18:41:08.034024   22327 main.go:141] libmachine: (ha-113226-m02)     
	I0421 18:41:08.034050   22327 main.go:141] libmachine: (ha-113226-m02)     
	I0421 18:41:08.034100   22327 main.go:141] libmachine: (ha-113226-m02)   </devices>
	I0421 18:41:08.034113   22327 main.go:141] libmachine: (ha-113226-m02) </domain>
	I0421 18:41:08.034123   22327 main.go:141] libmachine: (ha-113226-m02) 
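
The XML dump above is the libvirt domain definition generated for ha-113226-m02: 2200 MiB of memory, 2 vCPUs, a host-passthrough CPU, the boot2docker ISO attached as a CD-ROM, the raw disk as a virtio device, and one virtio NIC on each of the mk-ha-113226 and default networks. A minimal sketch of defining and starting such a domain through the libvirt Go bindings follows; the import path, connection URI, and function name are assumptions, not the driver's actual code.

    package sketch

    import (
    	"fmt"

    	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    // defineAndStart registers a domain from generated XML and boots it,
    // roughly the "define libvirt domain using xml" / "Creating domain..." steps above.
    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // URI is an assumption
    	if err != nil {
    		return fmt.Errorf("connect: %w", err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return fmt.Errorf("define: %w", err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // Create() starts the defined domain
    		return fmt.Errorf("start: %w", err)
    	}
    	return nil
    }
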
	I0421 18:41:08.040923   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:71:77:f4 in network default
	I0421 18:41:08.041467   22327 main.go:141] libmachine: (ha-113226-m02) Ensuring networks are active...
	I0421 18:41:08.041487   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:08.042146   22327 main.go:141] libmachine: (ha-113226-m02) Ensuring network default is active
	I0421 18:41:08.042501   22327 main.go:141] libmachine: (ha-113226-m02) Ensuring network mk-ha-113226 is active
	I0421 18:41:08.042871   22327 main.go:141] libmachine: (ha-113226-m02) Getting domain xml...
	I0421 18:41:08.043522   22327 main.go:141] libmachine: (ha-113226-m02) Creating domain...
	I0421 18:41:09.277030   22327 main.go:141] libmachine: (ha-113226-m02) Waiting to get IP...
	I0421 18:41:09.277872   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:09.278407   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:09.278429   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:09.278383   22722 retry.go:31] will retry after 263.544195ms: waiting for machine to come up
	I0421 18:41:09.544042   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:09.544596   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:09.544623   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:09.544561   22722 retry.go:31] will retry after 314.37187ms: waiting for machine to come up
	I0421 18:41:09.859966   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:09.860460   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:09.860483   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:09.860426   22722 retry.go:31] will retry after 403.379124ms: waiting for machine to come up
	I0421 18:41:10.264830   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:10.265239   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:10.265263   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:10.265211   22722 retry.go:31] will retry after 570.842593ms: waiting for machine to come up
	I0421 18:41:10.837904   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:10.838340   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:10.838363   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:10.838287   22722 retry.go:31] will retry after 563.730901ms: waiting for machine to come up
	I0421 18:41:11.403949   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:11.404374   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:11.404411   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:11.404336   22722 retry.go:31] will retry after 624.074886ms: waiting for machine to come up
	I0421 18:41:12.029954   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:12.030595   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:12.030625   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:12.030548   22722 retry.go:31] will retry after 816.379918ms: waiting for machine to come up
	I0421 18:41:12.848209   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:12.848659   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:12.848688   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:12.848617   22722 retry.go:31] will retry after 1.033034557s: waiting for machine to come up
	I0421 18:41:13.883601   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:13.883983   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:13.884018   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:13.883940   22722 retry.go:31] will retry after 1.604433858s: waiting for machine to come up
	I0421 18:41:15.490700   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:15.491113   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:15.491143   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:15.491065   22722 retry.go:31] will retry after 1.927254199s: waiting for machine to come up
	I0421 18:41:17.419508   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:17.419918   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:17.419950   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:17.419902   22722 retry.go:31] will retry after 2.429342073s: waiting for machine to come up
	I0421 18:41:19.850459   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:19.850904   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:19.850930   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:19.850863   22722 retry.go:31] will retry after 2.535315039s: waiting for machine to come up
	I0421 18:41:22.388249   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:22.388723   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:22.388749   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:22.388682   22722 retry.go:31] will retry after 3.428684679s: waiting for machine to come up
	I0421 18:41:25.819051   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:25.819520   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:25.819547   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:25.819474   22722 retry.go:31] will retry after 4.932403392s: waiting for machine to come up
	I0421 18:41:30.755560   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.756031   22327 main.go:141] libmachine: (ha-113226-m02) Found IP for machine: 192.168.39.233
	I0421 18:41:30.756057   22327 main.go:141] libmachine: (ha-113226-m02) Reserving static IP address...
	I0421 18:41:30.756069   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has current primary IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.756397   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find host DHCP lease matching {name: "ha-113226-m02", mac: "52:54:00:4f:2c:56", ip: "192.168.39.233"} in network mk-ha-113226
	I0421 18:41:30.828076   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Getting to WaitForSSH function...
	I0421 18:41:30.828110   22327 main.go:141] libmachine: (ha-113226-m02) Reserved static IP address: 192.168.39.233
	I0421 18:41:30.828132   22327 main.go:141] libmachine: (ha-113226-m02) Waiting for SSH to be available...
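
The retry lines above show the driver polling the network's DHCP leases for the new domain's MAC address, sleeping a little longer after each miss, until the guest obtains 192.168.39.233, which is then reserved as a static lease. Here is a minimal sketch of that wait-with-growing-backoff pattern; lookupIP is a hypothetical stand-in for the driver's lease lookup.

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForIP polls lookupIP until an address appears or the timeout expires,
    // sleeping a little longer after every miss, like the retry.go lines above.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay += delay / 2 // grow the interval between attempts
    		}
    	}
    	return "", fmt.Errorf("no IP after %s", timeout)
    }
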
	I0421 18:41:30.830409   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.830762   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:30.830786   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.830916   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Using SSH client type: external
	I0421 18:41:30.830944   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa (-rw-------)
	I0421 18:41:30.830973   22327 main.go:141] libmachine: (ha-113226-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:41:30.830993   22327 main.go:141] libmachine: (ha-113226-m02) DBG | About to run SSH command:
	I0421 18:41:30.831011   22327 main.go:141] libmachine: (ha-113226-m02) DBG | exit 0
	I0421 18:41:30.954705   22327 main.go:141] libmachine: (ha-113226-m02) DBG | SSH cmd err, output: <nil>: 
	I0421 18:41:30.954934   22327 main.go:141] libmachine: (ha-113226-m02) KVM machine creation complete!
	I0421 18:41:30.955258   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetConfigRaw
	I0421 18:41:30.955748   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:30.955937   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:30.956072   22327 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:41:30.956083   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:41:30.957450   22327 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:41:30.957468   22327 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:41:30.957475   22327 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:41:30.957482   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:30.959523   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.959883   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:30.959911   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.960012   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:30.960181   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:30.960368   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:30.960546   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:30.960719   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:30.960918   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:30.960929   22327 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:41:31.062147   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
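
Both SSH probes above (first via the external ssh binary, then via the built-in client) simply run `exit 0` against the guest with the freshly generated key to confirm the machine is reachable before provisioning continues. A minimal sketch of such a probe with golang.org/x/crypto/ssh follows; the key path is illustrative, and host-key checking is skipped here just as the logged ssh options do. Called with the address and key from the log, a nil return corresponds to the "SSH cmd err, output: <nil>" lines above.

    package sketch

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // probeSSH dials host:22 as user with the given private key and runs `exit 0`.
    func probeSSH(host, user, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
    	}
    	client, err := ssh.Dial("tcp", host+":22", cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0")
    }
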
	I0421 18:41:31.062177   22327 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:41:31.062187   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.064786   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.065144   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.065176   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.065288   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.065458   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.065630   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.065764   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.065979   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.066213   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.066228   22327 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:41:31.167215   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:41:31.167308   22327 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:41:31.167324   22327 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:41:31.167335   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:31.167592   22327 buildroot.go:166] provisioning hostname "ha-113226-m02"
	I0421 18:41:31.167624   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:31.167843   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.170564   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.170969   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.171002   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.171200   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.171379   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.171546   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.171694   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.171873   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.172089   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.172108   22327 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226-m02 && echo "ha-113226-m02" | sudo tee /etc/hostname
	I0421 18:41:31.291064   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226-m02
	
	I0421 18:41:31.291121   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.294169   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.294640   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.294672   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.294831   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.295021   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.295188   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.295338   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.295508   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.295669   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.295685   22327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:41:31.404406   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:41:31.404431   22327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:41:31.404444   22327 buildroot.go:174] setting up certificates
	I0421 18:41:31.404452   22327 provision.go:84] configureAuth start
	I0421 18:41:31.404463   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:31.404727   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:31.407309   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.407631   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.407650   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.407912   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.410073   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.410371   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.410394   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.410523   22327 provision.go:143] copyHostCerts
	I0421 18:41:31.410547   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:41:31.410573   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:41:31.410582   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:41:31.410641   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:41:31.410712   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:41:31.410732   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:41:31.410736   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:41:31.410759   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:41:31.410800   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:41:31.410816   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:41:31.410822   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:41:31.410841   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:41:31.410886   22327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226-m02 san=[127.0.0.1 192.168.39.233 ha-113226-m02 localhost minikube]
	I0421 18:41:31.532353   22327 provision.go:177] copyRemoteCerts
	I0421 18:41:31.532405   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:41:31.532428   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.534989   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.535344   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.535380   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.535524   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.535690   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.535836   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.535959   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:31.617511   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:41:31.617593   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:41:31.645600   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:41:31.645661   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:41:31.671982   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:41:31.672047   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:41:31.699144   22327 provision.go:87] duration metric: took 294.678995ms to configureAuth
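
configureAuth above re-syncs the host-side CA material, issues a server certificate for the node signed by the minikube CA with SANs [127.0.0.1 192.168.39.233 ha-113226-m02 localhost minikube] and org jenkins.ha-113226-m02, and copies server-key.pem, ca.pem and server.pem into /etc/docker on the guest. Below is a minimal sketch of issuing a SAN-bearing certificate like that with crypto/x509; CA loading and key persistence are reduced to the essentials and the helper name is hypothetical.

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate for the given DNS names and IPs,
    // signed by an already-loaded CA certificate and key.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
    	org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}}, // e.g. jenkins.ha-113226-m02
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames, // ha-113226-m02, localhost, minikube
    		IPAddresses:  ips,      // 127.0.0.1, 192.168.39.233
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
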
	I0421 18:41:31.699171   22327 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:41:31.699342   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:31.699431   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.702019   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.702444   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.702470   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.702623   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.702820   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.703023   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.703171   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.703345   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.703543   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.703558   22327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:41:31.986115   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:41:31.986143   22327 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:41:31.986154   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetURL
	I0421 18:41:31.987310   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Using libvirt version 6000000
	I0421 18:41:31.989434   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.989816   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.989863   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.990014   22327 main.go:141] libmachine: Docker is up and running!
	I0421 18:41:31.990031   22327 main.go:141] libmachine: Reticulating splines...
	I0421 18:41:31.990039   22327 client.go:171] duration metric: took 24.543360917s to LocalClient.Create
	I0421 18:41:31.990078   22327 start.go:167] duration metric: took 24.543441614s to libmachine.API.Create "ha-113226"
	I0421 18:41:31.990093   22327 start.go:293] postStartSetup for "ha-113226-m02" (driver="kvm2")
	I0421 18:41:31.990108   22327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:41:31.990128   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:31.990355   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:41:31.990377   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.992571   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.992920   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.992946   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.993048   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.993211   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.993348   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.993479   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:32.075424   22327 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:41:32.080586   22327 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:41:32.080613   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:41:32.080685   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:41:32.080758   22327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:41:32.080770   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:41:32.080864   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:41:32.091018   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:41:32.119003   22327 start.go:296] duration metric: took 128.894041ms for postStartSetup
	I0421 18:41:32.119052   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetConfigRaw
	I0421 18:41:32.119702   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:32.122281   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.122634   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.122655   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.122936   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:41:32.123151   22327 start.go:128] duration metric: took 24.694989634s to createHost
	I0421 18:41:32.123175   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:32.125395   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.125656   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.125694   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.125820   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:32.125994   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.126140   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.126243   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:32.126388   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:32.126534   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:32.126545   22327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:41:32.227071   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713724892.197171216
	
	I0421 18:41:32.227097   22327 fix.go:216] guest clock: 1713724892.197171216
	I0421 18:41:32.227104   22327 fix.go:229] Guest: 2024-04-21 18:41:32.197171216 +0000 UTC Remote: 2024-04-21 18:41:32.123164613 +0000 UTC m=+80.820319053 (delta=74.006603ms)
	I0421 18:41:32.227119   22327 fix.go:200] guest clock delta is within tolerance: 74.006603ms
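
The fix lines above compare the guest clock (read over SSH with `date`) against the host clock and accept the machine when the delta is within tolerance; here the difference is about 74ms. A small sketch of that comparison using the timestamps from the log; the 2s tolerance is an assumption, not minikube's actual constant.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockWithinTolerance reports whether guest and host clocks differ by at most tol.
    func clockWithinTolerance(guest, host time.Time, tol time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tol
    }

    func main() {
    	guest := time.Unix(0, 1713724892197171216).UTC()                 // 2024-04-21 18:41:32.197171216 UTC, from the log
    	host := time.Date(2024, 4, 21, 18, 41, 32, 123164613, time.UTC)  // the "Remote" timestamp above
    	fmt.Println(clockWithinTolerance(guest, host, 2*time.Second))    // true: delta ≈ 74.0066ms
    }
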
	I0421 18:41:32.227124   22327 start.go:83] releasing machines lock for "ha-113226-m02", held for 24.799092085s
	I0421 18:41:32.227141   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.227394   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:32.230084   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.230466   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.230492   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.233141   22327 out.go:177] * Found network options:
	I0421 18:41:32.234790   22327 out.go:177]   - NO_PROXY=192.168.39.60
	W0421 18:41:32.236133   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:41:32.236186   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.236815   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.236996   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.237083   22327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:41:32.237123   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	W0421 18:41:32.237218   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:41:32.237300   22327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:41:32.237325   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:32.239834   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240100   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240208   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.240236   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240389   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:32.240499   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.240528   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.240527   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240693   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:32.240695   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:32.240885   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:32.240902   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.241035   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:32.241137   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:32.491933   22327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:41:32.498595   22327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:41:32.498672   22327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:41:32.522547   22327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:41:32.522573   22327 start.go:494] detecting cgroup driver to use...
	I0421 18:41:32.522632   22327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:41:32.547775   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:41:32.563316   22327 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:41:32.563367   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:41:32.578972   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:41:32.593734   22327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:41:32.727976   22327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:41:32.884071   22327 docker.go:233] disabling docker service ...
	I0421 18:41:32.884133   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:41:32.900565   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:41:32.914082   22327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:41:33.062759   22327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:41:33.190485   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:41:33.207746   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:41:33.228289   22327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:41:33.228356   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.241881   22327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:41:33.241949   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.254726   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.266578   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.278457   22327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:41:33.290519   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.302272   22327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.321338   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.334037   22327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:41:33.345455   22327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:41:33.345503   22327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:41:33.360481   22327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:41:33.372097   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:41:33.488170   22327 ssh_runner.go:195] Run: sudo systemctl restart crio
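
The run above switches the guest's runtime over to CRI-O: containerd, cri-docker and docker are stopped and masked, crictl is pointed at /var/run/crio/crio.sock, and /etc/crio/crio.conf.d/02-crio.conf is rewritten in place with sed so that pause_image is registry.k8s.io/pause:3.9, cgroup_manager is "cgroupfs", conmon_cgroup is "pod" and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls, after which br_netfilter is loaded, IP forwarding is enabled and crio is restarted. Below is a minimal Go sketch of the rewrite-a-key-in-place idea those sed commands implement; setConfKey is a hypothetical helper, not the driver's code.

    package sketch

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfKey rewrites (or appends) `key = "value"` in a TOML-style config file,
    // mirroring what the sed invocations above do in place on the guest.
    func setConfKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	line := fmt.Sprintf("%s = %q", key, value)
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	if re.Match(data) {
    		data = re.ReplaceAll(data, []byte(line))
    	} else {
    		data = append(data, []byte("\n"+line+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

For example, setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs") would leave the same line in the file that the sed expression in the log writes, though the real provisioning runs its edits over SSH rather than locally.
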
	I0421 18:41:33.642971   22327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:41:33.643049   22327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:41:33.648549   22327 start.go:562] Will wait 60s for crictl version
	I0421 18:41:33.648606   22327 ssh_runner.go:195] Run: which crictl
	I0421 18:41:33.653179   22327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:41:33.694505   22327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:41:33.694566   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:41:33.725391   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:41:33.762152   22327 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:41:33.763577   22327 out.go:177]   - env NO_PROXY=192.168.39.60
	I0421 18:41:33.764790   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:33.767163   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:33.767600   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:33.767633   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:33.767797   22327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:41:33.773374   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:41:33.788650   22327 mustload.go:65] Loading cluster: ha-113226
	I0421 18:41:33.788874   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:33.789129   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:33.789157   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:33.803420   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0421 18:41:33.803803   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:33.804361   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:33.804381   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:33.804783   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:33.804993   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:33.806740   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:41:33.807049   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:33.807072   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:33.821142   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I0421 18:41:33.821741   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:33.822132   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:33.822154   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:33.822477   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:33.822654   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:41:33.822814   22327 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.233
	I0421 18:41:33.822825   22327 certs.go:194] generating shared ca certs ...
	I0421 18:41:33.822842   22327 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:33.822974   22327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:41:33.823016   22327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:41:33.823025   22327 certs.go:256] generating profile certs ...
	I0421 18:41:33.823095   22327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:41:33.823119   22327 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886
	I0421 18:41:33.823132   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.233 192.168.39.254]
	I0421 18:41:34.029355   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886 ...
	I0421 18:41:34.029382   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886: {Name:mk42199ee0de701846fe5b05e91e06a1c77e212f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:34.029560   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886 ...
	I0421 18:41:34.029577   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886: {Name:mk70f9a427951197d7f02d7d00c32af57a972251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:34.029676   22327 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:41:34.029806   22327 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:41:34.029925   22327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:41:34.029941   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:41:34.029952   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:41:34.029962   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:41:34.029975   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:41:34.029985   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:41:34.029996   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:41:34.030007   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:41:34.030019   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:41:34.030089   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:41:34.030126   22327 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:41:34.030135   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:41:34.030156   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:41:34.030176   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:41:34.030202   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:41:34.030246   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:41:34.030273   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.030287   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.030300   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.030328   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:41:34.032917   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:34.033260   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:41:34.033290   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:34.033418   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:41:34.033624   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:41:34.033777   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:41:34.033901   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:41:34.110356   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 18:41:34.116257   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 18:41:34.128143   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 18:41:34.133441   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 18:41:34.146570   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 18:41:34.151351   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 18:41:34.163721   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 18:41:34.168800   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 18:41:34.189063   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 18:41:34.193942   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 18:41:34.205764   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 18:41:34.210551   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0421 18:41:34.222843   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:41:34.251304   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:41:34.277971   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:41:34.304333   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:41:34.331123   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0421 18:41:34.356975   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 18:41:34.383415   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:41:34.408381   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:41:34.434315   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:41:34.460926   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:41:34.488035   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:41:34.513110   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 18:41:34.531510   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 18:41:34.550153   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 18:41:34.569055   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 18:41:34.588137   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 18:41:34.608510   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0421 18:41:34.628141   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 18:41:34.645910   22327 ssh_runner.go:195] Run: openssl version
	I0421 18:41:34.652072   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:41:34.664058   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.668808   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.668855   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.675366   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 18:41:34.688325   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:41:34.701131   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.706261   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.706309   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.712415   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:41:34.724121   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:41:34.736978   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.741933   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.741983   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.747903   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
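
The three hash-and-link passes above are how the node's trust store is populated: for each CA bundle, openssl x509 -hash prints the subject hash and a <hash>.0 symlink is created under /etc/ssl/certs so OpenSSL's hashed-directory lookup can find the certificate. A minimal Go sketch of the same two steps for the minikubeCA bundle (assuming openssl on PATH and root privileges; an illustration only, not minikube's implementation):

// install_ca_link.go - sketch of the hash-and-symlink step the log performs
// for each CA bundle: OpenSSL looks certificates up in /etc/ssl/certs by
// subject hash, so a <hash>.0 symlink must point at the PEM file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// Same command as in the log: print the subject hash of the certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// Equivalent of `ln -fs <pem> /etc/ssl/certs/<hash>.0` (needs root).
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("installed", link, "->", pem)
}
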
	I0421 18:41:34.760620   22327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:41:34.765641   22327 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:41:34.765735   22327 kubeadm.go:928] updating node {m02 192.168.39.233 8443 v1.30.0 crio true true} ...
	I0421 18:41:34.765892   22327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:41:34.765936   22327 kube-vip.go:111] generating kube-vip config ...
	I0421 18:41:34.765979   22327 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:41:34.786047   22327 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:41:34.786116   22327 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
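
The manifest above is the static pod minikube generates for kube-vip on the new control-plane node: it announces the HA virtual IP 192.168.39.254 via ARP and, since control-plane load-balancing was auto-enabled, also balances API traffic on port 8443. Purely as an illustration of how such a manifest can be rendered (the struct and the trimmed-down template below are assumptions, not minikube's actual kube-vip template), a small Go text/template sketch:

// kubevip_manifest.go - illustrative only; not minikube's real template.
package main

import (
	"os"
	"text/template"
)

// vipConfig carries the values the log shows being injected into the manifest.
type vipConfig struct {
	VIP   string // HA virtual IP, e.g. 192.168.39.254
	Port  string // API server port, e.g. "8443"
	Image string // kube-vip image, e.g. ghcr.io/kube-vip/kube-vip:v0.7.1
}

// A trimmed-down static-pod template (hypothetical) covering the key fields.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "true"
  hostNetwork: true
`

func main() {
	cfg := vipConfig{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.7.1"}
	// Render to stdout; minikube instead copies the rendered bytes to
	// /etc/kubernetes/manifests/kube-vip.yaml, as the log shows later.
	tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
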
	I0421 18:41:34.786171   22327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:41:34.799352   22327 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 18:41:34.799457   22327 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 18:41:34.811282   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 18:41:34.811313   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:41:34.811333   22327 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0421 18:41:34.811375   22327 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0421 18:41:34.811388   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:41:34.816951   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 18:41:34.816974   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 18:42:06.213874   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:42:06.213954   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:42:06.220168   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 18:42:06.220202   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 18:42:34.733837   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:42:34.750974   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:42:34.751083   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:42:34.756492   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 18:42:34.756525   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
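
The transfers above fetch kubectl, kubeadm and kubelet from dl.k8s.io; the checksum=file:... suffix in the download URLs means each binary is verified against its published .sha256 file before being copied to /var/lib/minikube/binaries. A self-contained sketch of that download-then-verify step using plain net/http (minikube's own download package works differently; the temp path and the single-binary scope here are assumptions):

// fetch_and_verify.go - illustrative sketch of downloading a release binary
// and checking it against its published SHA-256, as the log does for
// kubectl, kubeadm and kubelet.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	got, err := fetch(base, "/tmp/kubectl")
	if err != nil {
		panic(err)
	}
	// The .sha256 file published alongside the binary contains the hex digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, strings.TrimSpace(string(want))))
	}
	fmt.Println("kubectl checksum OK:", got)
}
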
	I0421 18:42:35.225131   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 18:42:35.236371   22327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 18:42:35.256435   22327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:42:35.274746   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 18:42:35.294770   22327 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:42:35.299386   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:42:35.314917   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:42:35.444048   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:42:35.462102   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:42:35.462459   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:42:35.462490   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:42:35.477409   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0421 18:42:35.477847   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:42:35.478377   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:42:35.478404   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:42:35.478731   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:42:35.478934   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:42:35.479125   22327 start.go:316] joinCluster: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:42:35.479245   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 18:42:35.479266   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:42:35.482274   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:42:35.482698   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:42:35.482740   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:42:35.482920   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:42:35.483131   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:42:35.483292   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:42:35.483468   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:42:35.657255   22327 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:42:35.657356   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hv1tgo.edjk7g6dh6kic30b --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m02 --control-plane --apiserver-advertise-address=192.168.39.233 --apiserver-bind-port=8443"
	I0421 18:42:58.476187   22327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hv1tgo.edjk7g6dh6kic30b --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m02 --control-plane --apiserver-advertise-address=192.168.39.233 --apiserver-bind-port=8443": (22.818796491s)
	I0421 18:42:58.476227   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 18:42:59.060505   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-113226-m02 minikube.k8s.io/updated_at=2024_04_21T18_42_59_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-113226 minikube.k8s.io/primary=false
	I0421 18:42:59.200952   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-113226-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 18:42:59.318871   22327 start.go:318] duration metric: took 23.839742493s to joinCluster
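
Once kubeadm join returns, the log shows two follow-up kubectl calls: labelling ha-113226-m02 with minikube's bookkeeping labels and removing the node-role.kubernetes.io/control-plane:NoSchedule taint so the new control-plane node can also schedule workloads. The same pair of operations expressed with client-go, as a hedged sketch (the kubeconfig path and label values are placeholders, not minikube's exact ones):

// label_and_untaint.go - sketch of the post-join step: label the node and
// drop the control-plane NoSchedule taint, equivalent to the two kubectl
// commands in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube runs kubectl with
	// /var/lib/minikube/kubeconfig on the primary node instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-113226-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Label the node (values are illustrative).
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/name"] = "ha-113226"
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Drop the control-plane NoSchedule taint, like `kubectl taint ... :NoSchedule-`.
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != "node-role.kubernetes.io/control-plane" {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("labeled and untainted", node.Name)
}
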
	I0421 18:42:59.318957   22327 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:42:59.320426   22327 out.go:177] * Verifying Kubernetes components...
	I0421 18:42:59.319266   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:42:59.321784   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:42:59.577798   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:42:59.662837   22327 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:42:59.663046   22327 kapi.go:59] client config for ha-113226: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 18:42:59.663129   22327 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0421 18:42:59.663334   22327 node_ready.go:35] waiting up to 6m0s for node "ha-113226-m02" to be "Ready" ...
	I0421 18:42:59.663433   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:42:59.663441   22327 round_trippers.go:469] Request Headers:
	I0421 18:42:59.663449   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:42:59.663453   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:42:59.675124   22327 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 18:43:00.163842   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:00.163863   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:00.163871   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:00.163875   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:00.175446   22327 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 18:43:00.663890   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:00.663914   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:00.663923   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:00.663927   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:00.669145   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:01.164255   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:01.164274   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:01.164284   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:01.164290   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:01.168735   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:01.663577   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:01.663603   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:01.663612   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:01.663616   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:01.668364   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:01.669245   22327 node_ready.go:53] node "ha-113226-m02" has status "Ready":"False"
	I0421 18:43:02.163638   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:02.163664   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:02.163671   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:02.163676   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:02.167128   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:02.664253   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:02.664296   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:02.664307   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:02.664314   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:02.667211   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:03.164498   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:03.164518   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:03.164527   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:03.164529   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:03.168450   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:03.663538   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:03.663559   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:03.663567   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:03.663570   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:03.669151   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:03.669919   22327 node_ready.go:53] node "ha-113226-m02" has status "Ready":"False"
	I0421 18:43:04.164139   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:04.164165   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:04.164175   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:04.164182   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:04.169937   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:04.663857   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:04.663879   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:04.663889   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:04.663894   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:04.667722   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:05.163827   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:05.163849   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:05.163857   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:05.163864   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:05.167691   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:05.664121   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:05.664160   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:05.664168   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:05.664171   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:05.667759   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:06.164027   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:06.164051   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:06.164060   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:06.164066   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:06.167829   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:06.168744   22327 node_ready.go:53] node "ha-113226-m02" has status "Ready":"False"
	I0421 18:43:06.663968   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:06.663992   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:06.664000   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:06.664003   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:06.668068   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:07.164218   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:07.164236   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:07.164243   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:07.164246   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:07.167556   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:07.664052   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:07.664080   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:07.664090   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:07.664095   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:07.667546   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:08.163677   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:08.163696   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.163703   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.163707   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.169726   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:08.170739   22327 node_ready.go:49] node "ha-113226-m02" has status "Ready":"True"
	I0421 18:43:08.170756   22327 node_ready.go:38] duration metric: took 8.507391931s for node "ha-113226-m02" to be "Ready" ...
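
The node_ready wait above is just repeated GETs of /api/v1/nodes/ha-113226-m02 until the node's Ready condition turns True, which took about 8.5 s here. A minimal client-go sketch of the same poll (the 500 ms interval is an assumption; the 6-minute budget matches the "waiting up to 6m0s" message):

// wait_node_ready.go - sketch of polling a node's Ready condition, mirroring
// the node_ready wait loop in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18702-3854/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, like the "waiting up to 6m0s" wait.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-113226-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
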
	I0421 18:43:08.170769   22327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:43:08.170850   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:08.170860   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.170867   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.170872   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.175673   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:08.182999   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.183061   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n8sbt
	I0421 18:43:08.183070   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.183077   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.183081   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.185657   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.186391   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:08.186405   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.186412   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.186416   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.192600   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:08.193122   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:08.193138   22327 pod_ready.go:81] duration metric: took 10.120033ms for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.193150   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.193211   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zhskp
	I0421 18:43:08.193222   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.193232   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.193244   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.195933   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.196692   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:08.196706   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.196716   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.196721   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.199041   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.199607   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:08.199626   22327 pod_ready.go:81] duration metric: took 6.468093ms for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.199637   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.199678   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226
	I0421 18:43:08.199685   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.199692   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.199697   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.202002   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.202686   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:08.202699   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.202706   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.202710   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.204929   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.205609   22327 pod_ready.go:92] pod "etcd-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:08.205627   22327 pod_ready.go:81] duration metric: took 5.983588ms for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.205638   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.205687   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:08.205698   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.205708   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.205726   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.207996   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.209058   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:08.209073   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.209079   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.209083   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.211914   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.705961   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:08.705984   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.705992   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.705996   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.709215   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:08.709932   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:08.709948   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.709955   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.709958   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.713090   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:09.206310   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:09.206331   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.206339   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.206348   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.209837   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:09.210511   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:09.210525   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.210532   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.210537   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.213428   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:09.706298   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:09.706320   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.706328   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.706332   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.709783   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:09.710698   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:09.710714   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.710721   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.710726   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.713439   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:10.206480   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:10.206499   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.206506   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.206510   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.209906   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:10.210574   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:10.210588   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.210594   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.210597   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.213381   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:10.213996   22327 pod_ready.go:102] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"False"
	I0421 18:43:10.706731   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:10.706751   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.706759   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.706763   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.710560   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:10.711506   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:10.711524   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.711534   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.711540   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.714802   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:11.205901   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:11.205925   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.205933   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.205936   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.209465   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:11.210606   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:11.210620   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.210628   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.210631   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.213578   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:11.706583   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:11.706607   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.706615   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.706619   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.709755   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:11.710680   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:11.710695   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.710702   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.710706   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.713436   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:12.206838   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:12.206858   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.206865   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.206870   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.210978   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:12.211786   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:12.211801   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.211808   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.211811   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.214802   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:12.215457   22327 pod_ready.go:102] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"False"
	I0421 18:43:12.706314   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:12.706335   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.706343   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.706348   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.715912   22327 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 18:43:12.716796   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:12.716810   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.716817   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.716820   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.723311   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:13.206422   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:13.206442   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.206450   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.206454   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.210583   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:13.211537   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:13.211552   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.211559   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.211564   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.214596   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:13.706297   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:13.706326   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.706336   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.706341   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.709424   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:13.710074   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:13.710087   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.710097   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.710103   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.713769   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:14.206579   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:14.206607   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.206617   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.206623   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.210189   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:14.210935   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:14.210950   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.210957   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.210964   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.213937   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:14.706103   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:14.706127   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.706136   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.706141   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.711621   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:14.712567   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:14.712586   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.712598   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.712602   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.715648   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:14.716207   22327 pod_ready.go:102] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"False"
	I0421 18:43:15.206720   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:15.206745   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.206753   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.206758   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.210676   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:15.211393   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:15.211409   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.211419   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.211423   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.214588   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:15.705919   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:15.705942   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.705950   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.705954   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.710695   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:15.711698   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:15.711712   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.711720   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.711724   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.715433   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.206082   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:16.206101   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.206108   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.206117   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.209694   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.210466   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:16.210483   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.210489   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.210495   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.213284   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.213983   22327 pod_ready.go:92] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.214002   22327 pod_ready.go:81] duration metric: took 8.00835575s for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.214021   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.214151   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226
	I0421 18:43:16.214165   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.214175   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.214186   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.221621   22327 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 18:43:16.222382   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.222401   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.222409   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.222414   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.224694   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.225320   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.225340   22327 pod_ready.go:81] duration metric: took 11.309161ms for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.225352   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.225405   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226-m02
	I0421 18:43:16.225416   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.225426   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.225435   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.228686   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.229568   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:16.229585   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.229593   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.229597   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.232962   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.233488   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.233502   22327 pod_ready.go:81] duration metric: took 8.143635ms for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.233511   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.233553   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226
	I0421 18:43:16.233560   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.233567   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.233572   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.236070   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.236695   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.236709   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.236715   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.236718   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.239550   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.240022   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.240038   22327 pod_ready.go:81] duration metric: took 6.518593ms for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.240046   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.240088   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m02
	I0421 18:43:16.240095   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.240101   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.240104   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.242552   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.243030   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:16.243043   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.243050   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.243053   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.245520   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.246089   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.246105   22327 pod_ready.go:81] duration metric: took 6.052729ms for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.246113   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.406449   22327 request.go:629] Waited for 160.279782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:43:16.406507   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:43:16.406512   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.406519   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.406524   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.410076   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.606365   22327 request.go:629] Waited for 195.467139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.606428   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.606434   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.606441   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.606448   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.613365   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:16.614135   22327 pod_ready.go:92] pod "kube-proxy-h75dp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.614155   22327 pod_ready.go:81] duration metric: took 368.036366ms for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.614166   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.806362   22327 request.go:629] Waited for 192.134324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:43:16.806447   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:43:16.806453   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.806460   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.806466   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.810569   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:17.006920   22327 request.go:629] Waited for 195.417936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.006998   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.007006   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.007016   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.007023   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.010869   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.011538   22327 pod_ready.go:92] pod "kube-proxy-nsv74" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:17.011558   22327 pod_ready.go:81] duration metric: took 397.385262ms for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.011572   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.206608   22327 request.go:629] Waited for 194.957893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:43:17.206691   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:43:17.206699   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.206708   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.206718   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.210673   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.406933   22327 request.go:629] Waited for 195.36392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:17.407049   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:17.407059   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.407066   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.407071   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.410934   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.411897   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:17.411918   22327 pod_ready.go:81] duration metric: took 400.337546ms for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.411932   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.606521   22327 request.go:629] Waited for 194.509454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:43:17.606608   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:43:17.606615   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.606625   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.606644   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.611064   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:17.807083   22327 request.go:629] Waited for 195.387551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.807142   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.807158   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.807171   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.807178   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.810959   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.811539   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:17.811556   22327 pod_ready.go:81] duration metric: took 399.608297ms for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.811568   22327 pod_ready.go:38] duration metric: took 9.640761216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:43:17.811586   22327 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:43:17.811648   22327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:43:17.830033   22327 api_server.go:72] duration metric: took 18.511038156s to wait for apiserver process to appear ...
	I0421 18:43:17.830054   22327 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:43:17.830094   22327 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0421 18:43:17.836803   22327 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0421 18:43:17.836865   22327 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0421 18:43:17.836872   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.836885   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.836890   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.837764   22327 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0421 18:43:17.837852   22327 api_server.go:141] control plane version: v1.30.0
	I0421 18:43:17.837866   22327 api_server.go:131] duration metric: took 7.796464ms to wait for apiserver health ...
	I0421 18:43:17.837872   22327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:43:18.006236   22327 request.go:629] Waited for 168.289504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.006290   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.006295   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.006302   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.006305   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.012881   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:18.017830   22327 system_pods.go:59] 17 kube-system pods found
	I0421 18:43:18.017859   22327 system_pods.go:61] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:43:18.017870   22327 system_pods.go:61] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:43:18.017878   22327 system_pods.go:61] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:43:18.017881   22327 system_pods.go:61] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:43:18.017885   22327 system_pods.go:61] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:43:18.017888   22327 system_pods.go:61] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:43:18.017891   22327 system_pods.go:61] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:43:18.017894   22327 system_pods.go:61] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:43:18.017897   22327 system_pods.go:61] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:43:18.017900   22327 system_pods.go:61] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:43:18.017903   22327 system_pods.go:61] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:43:18.017906   22327 system_pods.go:61] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:43:18.017909   22327 system_pods.go:61] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:43:18.017912   22327 system_pods.go:61] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:43:18.017916   22327 system_pods.go:61] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:43:18.017918   22327 system_pods.go:61] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:43:18.017921   22327 system_pods.go:61] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:43:18.017927   22327 system_pods.go:74] duration metric: took 180.049975ms to wait for pod list to return data ...
	I0421 18:43:18.017938   22327 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:43:18.206377   22327 request.go:629] Waited for 188.343612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:43:18.206447   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:43:18.206455   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.206464   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.206472   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.209855   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:18.210091   22327 default_sa.go:45] found service account: "default"
	I0421 18:43:18.210111   22327 default_sa.go:55] duration metric: took 192.167076ms for default service account to be created ...
	I0421 18:43:18.210123   22327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:43:18.406546   22327 request.go:629] Waited for 196.356952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.406625   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.406630   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.406637   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.406644   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.412468   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:18.417535   22327 system_pods.go:86] 17 kube-system pods found
	I0421 18:43:18.417558   22327 system_pods.go:89] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:43:18.417563   22327 system_pods.go:89] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:43:18.417568   22327 system_pods.go:89] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:43:18.417572   22327 system_pods.go:89] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:43:18.417576   22327 system_pods.go:89] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:43:18.417581   22327 system_pods.go:89] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:43:18.417586   22327 system_pods.go:89] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:43:18.417590   22327 system_pods.go:89] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:43:18.417594   22327 system_pods.go:89] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:43:18.417598   22327 system_pods.go:89] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:43:18.417602   22327 system_pods.go:89] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:43:18.417607   22327 system_pods.go:89] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:43:18.417610   22327 system_pods.go:89] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:43:18.417617   22327 system_pods.go:89] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:43:18.417620   22327 system_pods.go:89] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:43:18.417623   22327 system_pods.go:89] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:43:18.417633   22327 system_pods.go:89] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:43:18.417640   22327 system_pods.go:126] duration metric: took 207.510678ms to wait for k8s-apps to be running ...
	I0421 18:43:18.417649   22327 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:43:18.417688   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:43:18.435157   22327 system_svc.go:56] duration metric: took 17.498178ms WaitForService to wait for kubelet
	I0421 18:43:18.435194   22327 kubeadm.go:576] duration metric: took 19.116202297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:43:18.435214   22327 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:43:18.606624   22327 request.go:629] Waited for 171.332169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0421 18:43:18.606699   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0421 18:43:18.606705   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.606713   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.606723   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.613229   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:18.614132   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:43:18.614155   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:43:18.614167   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:43:18.614171   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:43:18.614175   22327 node_conditions.go:105] duration metric: took 178.956677ms to run NodePressure ...
	I0421 18:43:18.614186   22327 start.go:240] waiting for startup goroutines ...
	I0421 18:43:18.614207   22327 start.go:254] writing updated cluster config ...
	I0421 18:43:18.616231   22327 out.go:177] 
	I0421 18:43:18.618028   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:43:18.618130   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:43:18.620030   22327 out.go:177] * Starting "ha-113226-m03" control-plane node in "ha-113226" cluster
	I0421 18:43:18.621642   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:43:18.621669   22327 cache.go:56] Caching tarball of preloaded images
	I0421 18:43:18.621783   22327 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:43:18.621798   22327 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:43:18.621932   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:43:18.622141   22327 start.go:360] acquireMachinesLock for ha-113226-m03: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:43:18.622184   22327 start.go:364] duration metric: took 23.454µs to acquireMachinesLock for "ha-113226-m03"
	I0421 18:43:18.622201   22327 start.go:93] Provisioning new machine with config: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:43:18.622327   22327 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0421 18:43:18.623934   22327 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 18:43:18.624010   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:18.624040   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:18.638967   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I0421 18:43:18.639331   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:18.639782   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:18.639801   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:18.640111   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:18.640292   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:18.640442   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:18.640574   22327 start.go:159] libmachine.API.Create for "ha-113226" (driver="kvm2")
	I0421 18:43:18.640605   22327 client.go:168] LocalClient.Create starting
	I0421 18:43:18.640635   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:43:18.640665   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:43:18.640679   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:43:18.640725   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:43:18.640745   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:43:18.640756   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:43:18.640772   22327 main.go:141] libmachine: Running pre-create checks...
	I0421 18:43:18.640781   22327 main.go:141] libmachine: (ha-113226-m03) Calling .PreCreateCheck
	I0421 18:43:18.640931   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetConfigRaw
	I0421 18:43:18.641262   22327 main.go:141] libmachine: Creating machine...
	I0421 18:43:18.641275   22327 main.go:141] libmachine: (ha-113226-m03) Calling .Create
	I0421 18:43:18.641396   22327 main.go:141] libmachine: (ha-113226-m03) Creating KVM machine...
	I0421 18:43:18.642673   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found existing default KVM network
	I0421 18:43:18.642837   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found existing private KVM network mk-ha-113226
	I0421 18:43:18.642973   22327 main.go:141] libmachine: (ha-113226-m03) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03 ...
	I0421 18:43:18.642998   22327 main.go:141] libmachine: (ha-113226-m03) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:43:18.643012   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:18.642938   23334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:43:18.643087   22327 main.go:141] libmachine: (ha-113226-m03) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:43:18.862514   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:18.862411   23334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa...
	I0421 18:43:19.041531   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:19.041385   23334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/ha-113226-m03.rawdisk...
	I0421 18:43:19.041571   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Writing magic tar header
	I0421 18:43:19.041587   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Writing SSH key tar header
	I0421 18:43:19.041604   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:19.041529   23334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03 ...
	I0421 18:43:19.041695   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03
	I0421 18:43:19.041732   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:43:19.041747   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03 (perms=drwx------)
	I0421 18:43:19.041759   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:43:19.041772   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:43:19.041791   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:43:19.041811   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:43:19.041826   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:43:19.041848   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:43:19.041868   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:43:19.041884   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:43:19.041901   22327 main.go:141] libmachine: (ha-113226-m03) Creating domain...
	I0421 18:43:19.041917   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:43:19.041927   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home
	I0421 18:43:19.041958   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Skipping /home - not owner
	I0421 18:43:19.042749   22327 main.go:141] libmachine: (ha-113226-m03) define libvirt domain using xml: 
	I0421 18:43:19.042770   22327 main.go:141] libmachine: (ha-113226-m03) <domain type='kvm'>
	I0421 18:43:19.042782   22327 main.go:141] libmachine: (ha-113226-m03)   <name>ha-113226-m03</name>
	I0421 18:43:19.042790   22327 main.go:141] libmachine: (ha-113226-m03)   <memory unit='MiB'>2200</memory>
	I0421 18:43:19.042799   22327 main.go:141] libmachine: (ha-113226-m03)   <vcpu>2</vcpu>
	I0421 18:43:19.042813   22327 main.go:141] libmachine: (ha-113226-m03)   <features>
	I0421 18:43:19.042821   22327 main.go:141] libmachine: (ha-113226-m03)     <acpi/>
	I0421 18:43:19.042829   22327 main.go:141] libmachine: (ha-113226-m03)     <apic/>
	I0421 18:43:19.042842   22327 main.go:141] libmachine: (ha-113226-m03)     <pae/>
	I0421 18:43:19.042857   22327 main.go:141] libmachine: (ha-113226-m03)     
	I0421 18:43:19.042870   22327 main.go:141] libmachine: (ha-113226-m03)   </features>
	I0421 18:43:19.042883   22327 main.go:141] libmachine: (ha-113226-m03)   <cpu mode='host-passthrough'>
	I0421 18:43:19.042895   22327 main.go:141] libmachine: (ha-113226-m03)   
	I0421 18:43:19.042908   22327 main.go:141] libmachine: (ha-113226-m03)   </cpu>
	I0421 18:43:19.042920   22327 main.go:141] libmachine: (ha-113226-m03)   <os>
	I0421 18:43:19.042938   22327 main.go:141] libmachine: (ha-113226-m03)     <type>hvm</type>
	I0421 18:43:19.042950   22327 main.go:141] libmachine: (ha-113226-m03)     <boot dev='cdrom'/>
	I0421 18:43:19.042963   22327 main.go:141] libmachine: (ha-113226-m03)     <boot dev='hd'/>
	I0421 18:43:19.042976   22327 main.go:141] libmachine: (ha-113226-m03)     <bootmenu enable='no'/>
	I0421 18:43:19.042987   22327 main.go:141] libmachine: (ha-113226-m03)   </os>
	I0421 18:43:19.042999   22327 main.go:141] libmachine: (ha-113226-m03)   <devices>
	I0421 18:43:19.043016   22327 main.go:141] libmachine: (ha-113226-m03)     <disk type='file' device='cdrom'>
	I0421 18:43:19.043038   22327 main.go:141] libmachine: (ha-113226-m03)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/boot2docker.iso'/>
	I0421 18:43:19.043051   22327 main.go:141] libmachine: (ha-113226-m03)       <target dev='hdc' bus='scsi'/>
	I0421 18:43:19.043069   22327 main.go:141] libmachine: (ha-113226-m03)       <readonly/>
	I0421 18:43:19.043090   22327 main.go:141] libmachine: (ha-113226-m03)     </disk>
	I0421 18:43:19.043109   22327 main.go:141] libmachine: (ha-113226-m03)     <disk type='file' device='disk'>
	I0421 18:43:19.043123   22327 main.go:141] libmachine: (ha-113226-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:43:19.043138   22327 main.go:141] libmachine: (ha-113226-m03)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/ha-113226-m03.rawdisk'/>
	I0421 18:43:19.043151   22327 main.go:141] libmachine: (ha-113226-m03)       <target dev='hda' bus='virtio'/>
	I0421 18:43:19.043167   22327 main.go:141] libmachine: (ha-113226-m03)     </disk>
	I0421 18:43:19.043182   22327 main.go:141] libmachine: (ha-113226-m03)     <interface type='network'>
	I0421 18:43:19.043191   22327 main.go:141] libmachine: (ha-113226-m03)       <source network='mk-ha-113226'/>
	I0421 18:43:19.043198   22327 main.go:141] libmachine: (ha-113226-m03)       <model type='virtio'/>
	I0421 18:43:19.043203   22327 main.go:141] libmachine: (ha-113226-m03)     </interface>
	I0421 18:43:19.043211   22327 main.go:141] libmachine: (ha-113226-m03)     <interface type='network'>
	I0421 18:43:19.043222   22327 main.go:141] libmachine: (ha-113226-m03)       <source network='default'/>
	I0421 18:43:19.043229   22327 main.go:141] libmachine: (ha-113226-m03)       <model type='virtio'/>
	I0421 18:43:19.043235   22327 main.go:141] libmachine: (ha-113226-m03)     </interface>
	I0421 18:43:19.043246   22327 main.go:141] libmachine: (ha-113226-m03)     <serial type='pty'>
	I0421 18:43:19.043252   22327 main.go:141] libmachine: (ha-113226-m03)       <target port='0'/>
	I0421 18:43:19.043262   22327 main.go:141] libmachine: (ha-113226-m03)     </serial>
	I0421 18:43:19.043281   22327 main.go:141] libmachine: (ha-113226-m03)     <console type='pty'>
	I0421 18:43:19.043302   22327 main.go:141] libmachine: (ha-113226-m03)       <target type='serial' port='0'/>
	I0421 18:43:19.043324   22327 main.go:141] libmachine: (ha-113226-m03)     </console>
	I0421 18:43:19.043333   22327 main.go:141] libmachine: (ha-113226-m03)     <rng model='virtio'>
	I0421 18:43:19.043344   22327 main.go:141] libmachine: (ha-113226-m03)       <backend model='random'>/dev/random</backend>
	I0421 18:43:19.043352   22327 main.go:141] libmachine: (ha-113226-m03)     </rng>
	I0421 18:43:19.043361   22327 main.go:141] libmachine: (ha-113226-m03)     
	I0421 18:43:19.043369   22327 main.go:141] libmachine: (ha-113226-m03)     
	I0421 18:43:19.043382   22327 main.go:141] libmachine: (ha-113226-m03)   </devices>
	I0421 18:43:19.043396   22327 main.go:141] libmachine: (ha-113226-m03) </domain>
	I0421 18:43:19.043410   22327 main.go:141] libmachine: (ha-113226-m03) 
	I0421 18:43:19.050231   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:bb:88:d2 in network default
	I0421 18:43:19.050893   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:19.050922   22327 main.go:141] libmachine: (ha-113226-m03) Ensuring networks are active...
	I0421 18:43:19.051681   22327 main.go:141] libmachine: (ha-113226-m03) Ensuring network default is active
	I0421 18:43:19.052028   22327 main.go:141] libmachine: (ha-113226-m03) Ensuring network mk-ha-113226 is active
	I0421 18:43:19.052513   22327 main.go:141] libmachine: (ha-113226-m03) Getting domain xml...
	I0421 18:43:19.053201   22327 main.go:141] libmachine: (ha-113226-m03) Creating domain...
	I0421 18:43:20.282657   22327 main.go:141] libmachine: (ha-113226-m03) Waiting to get IP...
	I0421 18:43:20.283405   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:20.283765   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:20.283799   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:20.283754   23334 retry.go:31] will retry after 263.965209ms: waiting for machine to come up
	I0421 18:43:20.549193   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:20.549586   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:20.549612   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:20.549548   23334 retry.go:31] will retry after 307.648351ms: waiting for machine to come up
	I0421 18:43:20.858779   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:20.859186   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:20.859208   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:20.859147   23334 retry.go:31] will retry after 478.221684ms: waiting for machine to come up
	I0421 18:43:21.338809   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:21.339242   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:21.339264   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:21.339199   23334 retry.go:31] will retry after 454.481902ms: waiting for machine to come up
	I0421 18:43:21.794928   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:21.795348   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:21.795379   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:21.795316   23334 retry.go:31] will retry after 659.132545ms: waiting for machine to come up
	I0421 18:43:22.456306   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:22.456865   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:22.456889   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:22.456832   23334 retry.go:31] will retry after 627.99293ms: waiting for machine to come up
	I0421 18:43:23.086265   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:23.086778   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:23.086807   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:23.086727   23334 retry.go:31] will retry after 949.480394ms: waiting for machine to come up
	I0421 18:43:24.038224   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:24.038692   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:24.038717   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:24.038652   23334 retry.go:31] will retry after 1.382407958s: waiting for machine to come up
	I0421 18:43:25.423095   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:25.423529   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:25.423558   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:25.423494   23334 retry.go:31] will retry after 1.171639093s: waiting for machine to come up
	I0421 18:43:26.596533   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:26.596951   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:26.596994   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:26.596935   23334 retry.go:31] will retry after 2.17194928s: waiting for machine to come up
	I0421 18:43:28.770642   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:28.771108   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:28.771130   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:28.771055   23334 retry.go:31] will retry after 2.597239918s: waiting for machine to come up
	I0421 18:43:31.371688   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:31.372148   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:31.372185   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:31.372084   23334 retry.go:31] will retry after 2.290553278s: waiting for machine to come up
	I0421 18:43:33.664411   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:33.664824   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:33.664857   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:33.664778   23334 retry.go:31] will retry after 3.791671556s: waiting for machine to come up
	I0421 18:43:37.459069   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:37.459525   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:37.459554   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:37.459485   23334 retry.go:31] will retry after 3.846723062s: waiting for machine to come up
	I0421 18:43:41.307401   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.307927   22327 main.go:141] libmachine: (ha-113226-m03) Found IP for machine: 192.168.39.221
	I0421 18:43:41.307967   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has current primary IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.307983   22327 main.go:141] libmachine: (ha-113226-m03) Reserving static IP address...
	I0421 18:43:41.308381   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find host DHCP lease matching {name: "ha-113226-m03", mac: "52:54:00:f7:32:68", ip: "192.168.39.221"} in network mk-ha-113226
	I0421 18:43:41.385983   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Getting to WaitForSSH function...
	I0421 18:43:41.386018   22327 main.go:141] libmachine: (ha-113226-m03) Reserved static IP address: 192.168.39.221
	I0421 18:43:41.386032   22327 main.go:141] libmachine: (ha-113226-m03) Waiting for SSH to be available...
	I0421 18:43:41.388666   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.389103   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.389134   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.389284   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Using SSH client type: external
	I0421 18:43:41.389311   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa (-rw-------)
	I0421 18:43:41.389345   22327 main.go:141] libmachine: (ha-113226-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:43:41.389359   22327 main.go:141] libmachine: (ha-113226-m03) DBG | About to run SSH command:
	I0421 18:43:41.389376   22327 main.go:141] libmachine: (ha-113226-m03) DBG | exit 0
	I0421 18:43:41.522248   22327 main.go:141] libmachine: (ha-113226-m03) DBG | SSH cmd err, output: <nil>: 
	I0421 18:43:41.522522   22327 main.go:141] libmachine: (ha-113226-m03) KVM machine creation complete!
	I0421 18:43:41.522825   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetConfigRaw
	I0421 18:43:41.523348   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:41.523558   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:41.523747   22327 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:43:41.523767   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:43:41.525063   22327 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:43:41.525075   22327 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:43:41.525080   22327 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:43:41.525086   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.527574   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.528023   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.528052   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.528226   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.528398   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.528570   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.528716   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.528890   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.529075   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.529086   22327 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:43:41.633923   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:43:41.633949   22327 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:43:41.633959   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.636679   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.637099   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.637134   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.637304   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.637530   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.637720   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.637851   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.638001   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.638235   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.638254   22327 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:43:41.747401   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:43:41.747461   22327 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:43:41.747467   22327 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:43:41.747474   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:41.747724   22327 buildroot.go:166] provisioning hostname "ha-113226-m03"
	I0421 18:43:41.747753   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:41.747907   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.750396   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.750782   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.750810   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.750955   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.751129   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.751296   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.751435   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.751598   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.751775   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.751792   22327 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226-m03 && echo "ha-113226-m03" | sudo tee /etc/hostname
	I0421 18:43:41.871320   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226-m03
	
	I0421 18:43:41.871347   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.874096   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.874440   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.874471   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.874679   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.874906   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.875113   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.875287   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.875492   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.875712   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.875741   22327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:43:42.002388   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:43:42.002422   22327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:43:42.002442   22327 buildroot.go:174] setting up certificates
	I0421 18:43:42.002453   22327 provision.go:84] configureAuth start
	I0421 18:43:42.002465   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:42.002691   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:42.005576   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.006028   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.006049   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.006256   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.008704   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.009114   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.009159   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.009338   22327 provision.go:143] copyHostCerts
	I0421 18:43:42.009373   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:43:42.009402   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:43:42.009409   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:43:42.009471   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:43:42.009542   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:43:42.009560   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:43:42.009567   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:43:42.009590   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:43:42.009630   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:43:42.009645   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:43:42.009652   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:43:42.009671   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:43:42.009718   22327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226-m03 san=[127.0.0.1 192.168.39.221 ha-113226-m03 localhost minikube]
	I0421 18:43:42.180379   22327 provision.go:177] copyRemoteCerts
	I0421 18:43:42.180433   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:43:42.180453   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.183320   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.183629   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.183661   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.183869   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.184065   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.184239   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.184369   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:42.269864   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:43:42.269938   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:43:42.299481   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:43:42.299551   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:43:42.329886   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:43:42.329960   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:43:42.360793   22327 provision.go:87] duration metric: took 358.329156ms to configureAuth
	I0421 18:43:42.360820   22327 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:43:42.361005   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:43:42.361069   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.364065   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.364454   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.364501   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.364695   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.364905   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.365070   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.365220   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.365399   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:42.365559   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:42.365575   22327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:43:42.652016   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:43:42.652050   22327 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:43:42.652060   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetURL
	I0421 18:43:42.653479   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Using libvirt version 6000000
	I0421 18:43:42.655459   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.655853   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.655878   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.656062   22327 main.go:141] libmachine: Docker is up and running!
	I0421 18:43:42.656076   22327 main.go:141] libmachine: Reticulating splines...
	I0421 18:43:42.656083   22327 client.go:171] duration metric: took 24.015468696s to LocalClient.Create
	I0421 18:43:42.656109   22327 start.go:167] duration metric: took 24.015535075s to libmachine.API.Create "ha-113226"
	I0421 18:43:42.656118   22327 start.go:293] postStartSetup for "ha-113226-m03" (driver="kvm2")
	I0421 18:43:42.656127   22327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:43:42.656143   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.656382   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:43:42.656406   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.658613   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.658954   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.658979   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.659087   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.659251   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.659404   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.659533   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:42.741615   22327 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:43:42.746528   22327 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:43:42.746553   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:43:42.746630   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:43:42.746714   22327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:43:42.746724   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:43:42.746799   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:43:42.758627   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:43:42.788877   22327 start.go:296] duration metric: took 132.746102ms for postStartSetup
	I0421 18:43:42.788939   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetConfigRaw
	I0421 18:43:42.789498   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:42.792329   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.792825   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.792856   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.793127   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:43:42.793376   22327 start.go:128] duration metric: took 24.171035236s to createHost
	I0421 18:43:42.793404   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.795760   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.796167   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.796195   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.796300   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.796487   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.796619   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.796820   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.796984   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:42.797185   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:42.797196   22327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:43:42.903234   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713725022.872356349
	
	I0421 18:43:42.903256   22327 fix.go:216] guest clock: 1713725022.872356349
	I0421 18:43:42.903266   22327 fix.go:229] Guest: 2024-04-21 18:43:42.872356349 +0000 UTC Remote: 2024-04-21 18:43:42.793390396 +0000 UTC m=+211.490544853 (delta=78.965953ms)
	I0421 18:43:42.903285   22327 fix.go:200] guest clock delta is within tolerance: 78.965953ms
	I0421 18:43:42.903292   22327 start.go:83] releasing machines lock for "ha-113226-m03", held for 24.281100015s
	I0421 18:43:42.903311   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.903590   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:42.906430   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.906779   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.906811   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.908946   22327 out.go:177] * Found network options:
	I0421 18:43:42.910484   22327 out.go:177]   - NO_PROXY=192.168.39.60,192.168.39.233
	W0421 18:43:42.911946   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 18:43:42.911968   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:43:42.911980   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.912498   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.912713   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.912814   22327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:43:42.912853   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	W0421 18:43:42.912871   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 18:43:42.912895   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:43:42.912962   22327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:43:42.912984   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.915561   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.915771   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.915967   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.916010   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.916136   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.916156   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.916168   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.916344   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.916351   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.916531   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.916535   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.916682   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.916691   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:42.916792   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:43.166848   22327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:43:43.173710   22327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:43:43.173774   22327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:43:43.190997   22327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:43:43.191022   22327 start.go:494] detecting cgroup driver to use...
	I0421 18:43:43.191087   22327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:43:43.208131   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:43:43.223716   22327 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:43:43.223775   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:43:43.240229   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:43:43.256732   22327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:43:43.372542   22327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:43:43.549547   22327 docker.go:233] disabling docker service ...
	I0421 18:43:43.549626   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:43:43.575795   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:43:43.593248   22327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:43:43.735032   22327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:43:43.862216   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:43:43.878589   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:43:43.899876   22327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:43:43.899938   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.912507   22327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:43:43.912597   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.924983   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.937626   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.950230   22327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:43:43.963616   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.976179   22327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.997570   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:44.009883   22327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:43:44.020868   22327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:43:44.020948   22327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:43:44.036454   22327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:43:44.047454   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:43:44.177987   22327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:43:44.344056   22327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:43:44.344137   22327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:43:44.349818   22327 start.go:562] Will wait 60s for crictl version
	I0421 18:43:44.349874   22327 ssh_runner.go:195] Run: which crictl
	I0421 18:43:44.354558   22327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:43:44.402975   22327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:43:44.403064   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:43:44.434503   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:43:44.473837   22327 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:43:44.475236   22327 out.go:177]   - env NO_PROXY=192.168.39.60
	I0421 18:43:44.476671   22327 out.go:177]   - env NO_PROXY=192.168.39.60,192.168.39.233
	I0421 18:43:44.477908   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:44.480300   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:44.480620   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:44.480649   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:44.480784   22327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:43:44.485783   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:43:44.500820   22327 mustload.go:65] Loading cluster: ha-113226
	I0421 18:43:44.501042   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:43:44.501348   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:44.501387   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:44.516249   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0421 18:43:44.516709   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:44.517189   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:44.517210   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:44.517467   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:44.517624   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:43:44.519069   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:43:44.519342   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:44.519378   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:44.534194   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0421 18:43:44.534640   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:44.535053   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:44.535075   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:44.535381   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:44.535569   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:43:44.535728   22327 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.221
	I0421 18:43:44.535740   22327 certs.go:194] generating shared ca certs ...
	I0421 18:43:44.535764   22327 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:43:44.535902   22327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:43:44.535950   22327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:43:44.535962   22327 certs.go:256] generating profile certs ...
	I0421 18:43:44.536083   22327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:43:44.536110   22327 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593
	I0421 18:43:44.536130   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.233 192.168.39.221 192.168.39.254]
	I0421 18:43:44.643314   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593 ...
	I0421 18:43:44.643344   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593: {Name:mkb2f3103261430dd6185de67171ae27d3e41d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:43:44.643520   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593 ...
	I0421 18:43:44.643532   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593: {Name:mk42802e6d09fbf06761adc99c0883feaac0109f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:43:44.643605   22327 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:43:44.643733   22327 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:43:44.643856   22327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:43:44.643871   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:43:44.643882   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:43:44.643893   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:43:44.643906   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:43:44.643918   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:43:44.643930   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:43:44.643942   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:43:44.643954   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:43:44.644002   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:43:44.644028   22327 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:43:44.644037   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:43:44.644062   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:43:44.644089   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:43:44.644110   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:43:44.644146   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:43:44.644171   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:43:44.644185   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:44.644197   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:43:44.644228   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:43:44.647457   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:44.647873   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:43:44.647903   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:44.648057   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:43:44.648242   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:43:44.648401   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:43:44.648521   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:43:44.722414   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 18:43:44.728835   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 18:43:44.743616   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 18:43:44.748556   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 18:43:44.761600   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 18:43:44.769790   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 18:43:44.784551   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 18:43:44.789854   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 18:43:44.804017   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 18:43:44.809447   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 18:43:44.825449   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 18:43:44.830431   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0421 18:43:44.844752   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:43:44.873184   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:43:44.901115   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:43:44.928023   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:43:44.956946   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0421 18:43:44.983943   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:43:45.012358   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:43:45.042297   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:43:45.068975   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:43:45.098030   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:43:45.126506   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:43:45.155445   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 18:43:45.176909   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 18:43:45.197454   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 18:43:45.216822   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 18:43:45.237271   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 18:43:45.256858   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0421 18:43:45.276582   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 18:43:45.295927   22327 ssh_runner.go:195] Run: openssl version
	I0421 18:43:45.302391   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:43:45.316327   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:43:45.321837   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:43:45.321907   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:43:45.328402   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 18:43:45.342235   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:43:45.356656   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:43:45.362482   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:43:45.362547   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:43:45.369213   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:43:45.382986   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:43:45.396286   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:45.401757   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:45.401827   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:45.408560   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:43:45.423297   22327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:43:45.428693   22327 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:43:45.428750   22327 kubeadm.go:928] updating node {m03 192.168.39.221 8443 v1.30.0 crio true true} ...
	I0421 18:43:45.428828   22327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:43:45.428854   22327 kube-vip.go:111] generating kube-vip config ...
	I0421 18:43:45.428889   22327 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:43:45.450912   22327 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:43:45.450971   22327 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0421 18:43:45.451026   22327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:43:45.464213   22327 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 18:43:45.464285   22327 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 18:43:45.477932   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0421 18:43:45.477944   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0421 18:43:45.477965   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:43:45.477984   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:43:45.477933   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 18:43:45.478022   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:43:45.478041   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:43:45.478139   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:43:45.499279   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:43:45.499314   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 18:43:45.499345   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 18:43:45.499364   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 18:43:45.499387   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 18:43:45.499456   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:43:45.513899   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 18:43:45.513933   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
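	The ?checksum=file:... URLs logged above describe the verification applied to these downloads; the same check can be reproduced by hand. A sketch using the URLs from this log, assuming curl and sha256sum are available on the workstation:
	
	  curl -fsSLO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
	  curl -fsSL https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -o kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" when the digest matches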
	I0421 18:43:46.544283   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 18:43:46.554876   22327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 18:43:46.573639   22327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:43:46.592290   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 18:43:46.612620   22327 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:43:46.617244   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:43:46.631372   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:43:46.768336   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:43:46.799710   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:43:46.800152   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:46.800214   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:46.817359   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0421 18:43:46.817791   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:46.818377   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:46.818405   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:46.818774   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:46.818996   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:43:46.819154   22327 start.go:316] joinCluster: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:defau
lt APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fa
lse istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:43:46.819272   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 18:43:46.819293   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:43:46.822362   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:46.822903   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:43:46.822929   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:46.823133   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:43:46.823326   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:43:46.823457   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:43:46.823638   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:43:46.999726   22327 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:43:46.999762   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiuvl6.fmcttkgnokee07jj --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m03 --control-plane --apiserver-advertise-address=192.168.39.221 --apiserver-bind-port=8443"
	I0421 18:44:11.071476   22327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiuvl6.fmcttkgnokee07jj --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m03 --control-plane --apiserver-advertise-address=192.168.39.221 --apiserver-bind-port=8443": (24.071686132s)
	I0421 18:44:11.071513   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 18:44:11.767641   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-113226-m03 minikube.k8s.io/updated_at=2024_04_21T18_44_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-113226 minikube.k8s.io/primary=false
	I0421 18:44:11.929445   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-113226-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 18:44:12.054158   22327 start.go:318] duration metric: took 25.234998723s to joinCluster
	I0421 18:44:12.054228   22327 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:44:12.056018   22327 out.go:177] * Verifying Kubernetes components...
	I0421 18:44:12.054640   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:44:12.058119   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:44:12.361693   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:44:12.431649   22327 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:44:12.431974   22327 kapi.go:59] client config for ha-113226: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 18:44:12.432051   22327 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0421 18:44:12.432352   22327 node_ready.go:35] waiting up to 6m0s for node "ha-113226-m03" to be "Ready" ...
	I0421 18:44:12.432433   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:12.432443   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:12.432454   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:12.432462   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:12.436251   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:12.932518   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:12.932551   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:12.932561   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:12.932568   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:12.936225   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:13.432792   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:13.432813   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:13.432821   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:13.432825   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:13.436817   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:13.933171   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:13.933199   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:13.933215   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:13.933222   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:13.937491   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:14.433366   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:14.433400   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:14.433421   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:14.433428   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:14.437622   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:14.438984   22327 node_ready.go:53] node "ha-113226-m03" has status "Ready":"False"
	I0421 18:44:14.933343   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:14.933365   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:14.933373   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:14.933377   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:14.937421   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:15.433575   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:15.433603   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:15.433615   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:15.433621   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:15.437080   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:15.933567   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:15.933597   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:15.933609   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:15.933614   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:15.937640   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:16.433081   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:16.433104   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:16.433113   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:16.433118   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:16.436457   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:16.932582   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:16.932621   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:16.932628   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:16.932633   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:16.936420   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:16.937432   22327 node_ready.go:53] node "ha-113226-m03" has status "Ready":"False"
	I0421 18:44:17.432839   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:17.432859   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:17.432867   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:17.432871   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:17.437286   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:17.932566   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:17.932586   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:17.932594   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:17.932597   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:17.936095   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:18.433341   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:18.433367   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:18.433378   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:18.433382   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:18.437205   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:18.932909   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:18.932934   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:18.932943   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:18.932949   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:18.936607   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.432617   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:19.432636   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.432643   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.432647   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.436425   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.436981   22327 node_ready.go:53] node "ha-113226-m03" has status "Ready":"False"
	I0421 18:44:19.933302   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:19.933328   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.933339   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.933348   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.942552   22327 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 18:44:19.945571   22327 node_ready.go:49] node "ha-113226-m03" has status "Ready":"True"
	I0421 18:44:19.945597   22327 node_ready.go:38] duration metric: took 7.513225345s for node "ha-113226-m03" to be "Ready" ...
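	The Ready condition polled above can also be read directly with kubectl; a sketch, assuming the kubeconfig context carries the profile name ha-113226 (an assumption, not shown in this log):
	
	  kubectl --context ha-113226 get node ha-113226-m03 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True once the kubelet reports Ready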
	I0421 18:44:19.945608   22327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:44:19.945695   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:19.945709   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.945718   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.945723   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.952837   22327 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 18:44:19.959480   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.959547   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n8sbt
	I0421 18:44:19.959552   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.959560   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.959564   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.962025   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.962771   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:19.962790   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.962800   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.962804   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.965758   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.966583   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.966599   22327 pod_ready.go:81] duration metric: took 7.098468ms for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.966609   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.966655   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zhskp
	I0421 18:44:19.966662   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.966669   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.966677   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.970400   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.971188   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:19.971204   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.971214   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.971220   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.974194   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.974801   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.974818   22327 pod_ready.go:81] duration metric: took 8.203908ms for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.974827   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.974877   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226
	I0421 18:44:19.974886   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.974892   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.974896   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.977515   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.978120   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:19.978133   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.978140   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.978144   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.981261   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.981982   22327 pod_ready.go:92] pod "etcd-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.982000   22327 pod_ready.go:81] duration metric: took 7.165713ms for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.982013   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.982086   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:44:19.982096   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.982107   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.982112   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.984810   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.985491   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:19.985505   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.985511   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.985515   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.988496   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.989029   22327 pod_ready.go:92] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.989048   22327 pod_ready.go:81] duration metric: took 7.026733ms for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.989059   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:20.133355   22327 request.go:629] Waited for 144.227929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.133420   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.133426   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.133441   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.133454   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.137471   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:20.333548   22327 request.go:629] Waited for 195.282525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.333600   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.333605   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.333615   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.333620   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.337343   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:20.533819   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.533869   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.533882   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.533887   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.538505   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:20.733818   22327 request.go:629] Waited for 194.422606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.733920   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.733944   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.733954   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.733961   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.737480   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:20.990210   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.990229   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.990239   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.990245   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.993884   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.133399   22327 request.go:629] Waited for 138.234839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.133466   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.133471   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.133479   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.133484   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.137266   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.489966   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:21.489990   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.489999   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.490003   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.493587   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.533751   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.533795   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.533807   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.533812   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.537329   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.989861   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:21.989887   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.989899   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.989909   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.993796   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.994616   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.994633   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.994644   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.994649   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.997624   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:21.998318   22327 pod_ready.go:102] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 18:44:22.490076   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:22.490101   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.490109   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.490119   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.493771   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:22.494392   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:22.494408   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.494415   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.494419   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.497280   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:22.989879   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:22.989902   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.989913   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.989921   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.993750   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:22.994676   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:22.994693   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.994700   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.994704   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.997690   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:23.489283   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:23.489304   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.489313   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.489322   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.492851   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:23.493666   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:23.493681   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.493688   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.493691   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.496713   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:23.989930   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:23.989956   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.989966   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.989970   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.994038   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:23.994645   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:23.994661   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.994674   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.994678   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.997657   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:24.489587   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:24.489606   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.489614   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.489618   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.494001   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:24.495273   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:24.495287   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.495294   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.495298   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.498651   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:24.499235   22327 pod_ready.go:102] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 18:44:24.989479   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:24.989501   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.989509   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.989513   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.994556   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:44:24.995313   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:24.995329   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.995337   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.995342   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.998761   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:25.489739   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:25.489762   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.489773   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.489780   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.493243   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:25.494432   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:25.494446   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.494452   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.494466   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.497751   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:25.989969   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:25.989998   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.990010   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.990015   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.994140   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:25.994961   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:25.994981   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.994990   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.994995   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.997802   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:26.489608   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:26.489631   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.489640   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.489644   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.493502   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:26.494259   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:26.494275   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.494290   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.494297   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.497220   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:26.989519   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:26.989543   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.989554   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.989558   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.993454   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:26.994200   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:26.994215   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.994224   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.994232   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.997574   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:26.998181   22327 pod_ready.go:102] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 18:44:27.490038   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:27.490074   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.490087   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.490092   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.493968   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:27.494943   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:27.494970   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.494980   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.494987   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.497898   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:27.989844   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:27.989866   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.989873   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.989876   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.993937   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:27.994998   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:27.995016   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.995021   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.995025   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.997945   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:28.489895   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:28.489922   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.489933   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.489938   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.495192   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:44:28.496037   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:28.496053   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.496060   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.496064   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.500285   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.501157   22327 pod_ready.go:92] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.501184   22327 pod_ready.go:81] duration metric: took 8.512116199s for pod "etcd-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.501207   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.501290   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226
	I0421 18:44:28.501299   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.501309   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.501315   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.504839   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:28.505473   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:28.505491   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.505499   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.505505   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.508642   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:28.509296   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.509319   22327 pod_ready.go:81] duration metric: took 8.098376ms for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.509331   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.509404   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226-m02
	I0421 18:44:28.509415   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.509425   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.509431   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.513904   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.514841   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:28.514860   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.514876   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.514884   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.528780   22327 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 18:44:28.529530   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.529553   22327 pod_ready.go:81] duration metric: took 20.206887ms for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.529567   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.529641   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226-m03
	I0421 18:44:28.529657   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.529667   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.529674   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.533780   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.534539   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:28.534552   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.534560   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.534565   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.537173   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:28.537848   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.537865   22327 pod_ready.go:81] duration metric: took 8.290833ms for pod "kube-apiserver-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.537874   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.537940   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226
	I0421 18:44:28.537949   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.537955   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.537961   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.546267   22327 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 18:44:28.733368   22327 request.go:629] Waited for 186.183281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:28.733428   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:28.733439   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.733446   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.733452   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.737990   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.738659   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.738688   22327 pod_ready.go:81] duration metric: took 200.804444ms for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.738703   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.933540   22327 request.go:629] Waited for 194.748447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m02
	I0421 18:44:28.933612   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m02
	I0421 18:44:28.933620   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.933627   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.933633   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.937608   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:29.134152   22327 request.go:629] Waited for 195.312619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:29.134231   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:29.134240   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.134258   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.134267   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.137069   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:29.137738   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:29.137757   22327 pod_ready.go:81] duration metric: took 399.0412ms for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.137766   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.333759   22327 request.go:629] Waited for 195.930659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m03
	I0421 18:44:29.333834   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m03
	I0421 18:44:29.333841   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.333852   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.333863   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.337201   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:29.533627   22327 request.go:629] Waited for 195.356241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:29.533693   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:29.533699   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.533709   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.533719   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.537122   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:29.537952   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:29.537971   22327 pod_ready.go:81] duration metric: took 400.198289ms for pod "kube-controller-manager-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.537984   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.733539   22327 request.go:629] Waited for 195.499509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:44:29.733665   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:44:29.733685   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.733693   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.733699   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.738187   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:29.933611   22327 request.go:629] Waited for 194.353876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:29.933659   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:29.933663   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.933671   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.933694   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.937764   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:29.938451   22327 pod_ready.go:92] pod "kube-proxy-h75dp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:29.938467   22327 pod_ready.go:81] duration metric: took 400.477351ms for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.938490   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.133672   22327 request.go:629] Waited for 195.106299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:44:30.133719   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:44:30.133724   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.133732   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.133736   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.136644   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:30.333694   22327 request.go:629] Waited for 196.406156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:30.333775   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:30.333781   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.333797   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.333825   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.337379   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:30.338159   22327 pod_ready.go:92] pod "kube-proxy-nsv74" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:30.338178   22327 pod_ready.go:81] duration metric: took 399.676627ms for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.338188   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shlwr" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.533579   22327 request.go:629] Waited for 195.338039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shlwr
	I0421 18:44:30.533661   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shlwr
	I0421 18:44:30.533672   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.533683   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.533693   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.537213   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:30.733500   22327 request.go:629] Waited for 195.285993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:30.733590   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:30.733600   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.733608   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.733612   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.737270   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:30.737856   22327 pod_ready.go:92] pod "kube-proxy-shlwr" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:30.737875   22327 pod_ready.go:81] duration metric: took 399.679446ms for pod "kube-proxy-shlwr" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.737886   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.934093   22327 request.go:629] Waited for 196.112407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:44:30.934164   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:44:30.934183   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.934203   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.934211   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.937917   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.134234   22327 request.go:629] Waited for 195.370491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:31.134293   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:31.134298   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.134305   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.134308   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.139413   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:44:31.140426   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:31.140449   22327 pod_ready.go:81] duration metric: took 402.55556ms for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.140461   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.333766   22327 request.go:629] Waited for 193.242241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:44:31.333890   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:44:31.333901   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.333912   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.333922   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.337658   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.533857   22327 request.go:629] Waited for 195.427901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:31.533914   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:31.533921   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.533930   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.533935   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.537324   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.537864   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:31.537882   22327 pod_ready.go:81] duration metric: took 397.413345ms for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.537891   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.734018   22327 request.go:629] Waited for 196.06804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m03
	I0421 18:44:31.734124   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m03
	I0421 18:44:31.734137   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.734148   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.734158   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.738523   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:31.933786   22327 request.go:629] Waited for 194.567052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:31.933837   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:31.933842   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.933849   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.933854   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.937150   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.938235   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:31.938258   22327 pod_ready.go:81] duration metric: took 400.359928ms for pod "kube-scheduler-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.938274   22327 pod_ready.go:38] duration metric: took 11.992653557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:44:31.938304   22327 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:44:31.938378   22327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:44:31.955783   22327 api_server.go:72] duration metric: took 19.901521933s to wait for apiserver process to appear ...
	I0421 18:44:31.955808   22327 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:44:31.955845   22327 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0421 18:44:31.965302   22327 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0421 18:44:31.965388   22327 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0421 18:44:31.965400   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.965425   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.965436   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.966525   22327 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 18:44:31.966604   22327 api_server.go:141] control plane version: v1.30.0
	I0421 18:44:31.966622   22327 api_server.go:131] duration metric: took 10.807225ms to wait for apiserver health ...
	I0421 18:44:31.966632   22327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:44:32.134013   22327 request.go:629] Waited for 167.31001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.134122   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.134134   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.134141   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.134147   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.140412   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:44:32.147662   22327 system_pods.go:59] 24 kube-system pods found
	I0421 18:44:32.147685   22327 system_pods.go:61] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:44:32.147694   22327 system_pods.go:61] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:44:32.147698   22327 system_pods.go:61] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:44:32.147701   22327 system_pods.go:61] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:44:32.147704   22327 system_pods.go:61] "etcd-ha-113226-m03" [1df4d990-651f-489d-851e-025124e70edb] Running
	I0421 18:44:32.147710   22327 system_pods.go:61] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:44:32.147713   22327 system_pods.go:61] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:44:32.147716   22327 system_pods.go:61] "kindnet-rhmbs" [fe360217-fab8-4a62-ba7a-5e50131dbe19] Running
	I0421 18:44:32.147719   22327 system_pods.go:61] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:44:32.147722   22327 system_pods.go:61] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:44:32.147725   22327 system_pods.go:61] "kube-apiserver-ha-113226-m03" [5150fa0a-f4d2-4b1f-98b7-c1df0368547f] Running
	I0421 18:44:32.147733   22327 system_pods.go:61] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:44:32.147739   22327 system_pods.go:61] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:44:32.147742   22327 system_pods.go:61] "kube-controller-manager-ha-113226-m03" [5e23b988-465d-4ab7-9b63-b6b12797144f] Running
	I0421 18:44:32.147745   22327 system_pods.go:61] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:44:32.147748   22327 system_pods.go:61] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:44:32.147750   22327 system_pods.go:61] "kube-proxy-shlwr" [67a1811b-054e-4f00-9360-2fbe114b4d62] Running
	I0421 18:44:32.147753   22327 system_pods.go:61] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:44:32.147756   22327 system_pods.go:61] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:44:32.147759   22327 system_pods.go:61] "kube-scheduler-ha-113226-m03" [7b3d0da2-eec6-48c5-bd3b-76032498004a] Running
	I0421 18:44:32.147762   22327 system_pods.go:61] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:44:32.147764   22327 system_pods.go:61] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:44:32.147767   22327 system_pods.go:61] "kube-vip-ha-113226-m03" [6a55b958-1d3d-49a8-9ea2-3857e4e537a7] Running
	I0421 18:44:32.147769   22327 system_pods.go:61] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:44:32.147776   22327 system_pods.go:74] duration metric: took 181.135389ms to wait for pod list to return data ...
	I0421 18:44:32.147785   22327 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:44:32.334171   22327 request.go:629] Waited for 186.322052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:44:32.334219   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:44:32.334225   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.334232   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.334236   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.338123   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:32.338272   22327 default_sa.go:45] found service account: "default"
	I0421 18:44:32.338290   22327 default_sa.go:55] duration metric: took 190.49626ms for default service account to be created ...
	I0421 18:44:32.338303   22327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:44:32.534015   22327 request.go:629] Waited for 195.652986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.534114   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.534125   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.534132   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.534139   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.545543   22327 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 18:44:32.553872   22327 system_pods.go:86] 24 kube-system pods found
	I0421 18:44:32.553904   22327 system_pods.go:89] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:44:32.553910   22327 system_pods.go:89] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:44:32.553914   22327 system_pods.go:89] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:44:32.553918   22327 system_pods.go:89] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:44:32.553922   22327 system_pods.go:89] "etcd-ha-113226-m03" [1df4d990-651f-489d-851e-025124e70edb] Running
	I0421 18:44:32.553926   22327 system_pods.go:89] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:44:32.553931   22327 system_pods.go:89] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:44:32.553935   22327 system_pods.go:89] "kindnet-rhmbs" [fe360217-fab8-4a62-ba7a-5e50131dbe19] Running
	I0421 18:44:32.553940   22327 system_pods.go:89] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:44:32.553945   22327 system_pods.go:89] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:44:32.553950   22327 system_pods.go:89] "kube-apiserver-ha-113226-m03" [5150fa0a-f4d2-4b1f-98b7-c1df0368547f] Running
	I0421 18:44:32.553955   22327 system_pods.go:89] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:44:32.553960   22327 system_pods.go:89] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:44:32.553964   22327 system_pods.go:89] "kube-controller-manager-ha-113226-m03" [5e23b988-465d-4ab7-9b63-b6b12797144f] Running
	I0421 18:44:32.553970   22327 system_pods.go:89] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:44:32.553974   22327 system_pods.go:89] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:44:32.553979   22327 system_pods.go:89] "kube-proxy-shlwr" [67a1811b-054e-4f00-9360-2fbe114b4d62] Running
	I0421 18:44:32.553985   22327 system_pods.go:89] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:44:32.553989   22327 system_pods.go:89] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:44:32.553992   22327 system_pods.go:89] "kube-scheduler-ha-113226-m03" [7b3d0da2-eec6-48c5-bd3b-76032498004a] Running
	I0421 18:44:32.553999   22327 system_pods.go:89] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:44:32.554002   22327 system_pods.go:89] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:44:32.554008   22327 system_pods.go:89] "kube-vip-ha-113226-m03" [6a55b958-1d3d-49a8-9ea2-3857e4e537a7] Running
	I0421 18:44:32.554012   22327 system_pods.go:89] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:44:32.554021   22327 system_pods.go:126] duration metric: took 215.711595ms to wait for k8s-apps to be running ...
	I0421 18:44:32.554029   22327 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:44:32.554090   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:44:32.572228   22327 system_svc.go:56] duration metric: took 18.18626ms WaitForService to wait for kubelet
	I0421 18:44:32.572265   22327 kubeadm.go:576] duration metric: took 20.51800361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:44:32.572284   22327 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:44:32.733670   22327 request.go:629] Waited for 161.30523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0421 18:44:32.733757   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0421 18:44:32.733770   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.733781   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.733789   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.737464   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:32.738783   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:44:32.738804   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:44:32.738816   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:44:32.738822   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:44:32.738828   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:44:32.738833   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:44:32.738839   22327 node_conditions.go:105] duration metric: took 166.550488ms to run NodePressure ...
	I0421 18:44:32.738858   22327 start.go:240] waiting for startup goroutines ...
	I0421 18:44:32.738887   22327 start.go:254] writing updated cluster config ...
	I0421 18:44:32.739166   22327 ssh_runner.go:195] Run: rm -f paused
	I0421 18:44:32.788238   22327 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 18:44:32.790493   22327 out.go:177] * Done! kubectl is now configured to use "ha-113226" cluster and "default" namespace by default
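
The startup log above ends once the apiserver's /healthz endpoint has returned "ok" and every kube-system pod has been confirmed Running. For reference only, a minimal Go sketch of that kind of health-probe loop (not minikube's own code), assuming the endpoint shown in the log (https://192.168.39.60:8443/healthz) and skipping TLS verification so the example stays self-contained; the real client authenticates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Sketch-only assumption: the cluster CA is not loaded, so verification is skipped.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.60:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

An equivalent one-off check from a workstation with access to this cluster is: kubectl --context ha-113226 get --raw=/healthz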
	
	
	==> CRI-O <==
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.574965882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725284574941916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=367b6a80-c7d8-4872-a76e-5bfc930d695c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.575745419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=954e0d53-c375-410f-9e1d-6619217fe93f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.575798503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=954e0d53-c375-410f-9e1d-6619217fe93f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.576023175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=954e0d53-c375-410f-9e1d-6619217fe93f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.618383428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03fa7af0-4eb1-4138-8ad4-3aaa4dceeac5 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.618454247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03fa7af0-4eb1-4138-8ad4-3aaa4dceeac5 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.619454528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6002ca85-2572-4bf3-811e-c65c21a30eb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.619866635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725284619846435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6002ca85-2572-4bf3-811e-c65c21a30eb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.620623118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acc207e4-6349-4ec4-b56a-892e2612205f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.620781105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acc207e4-6349-4ec4-b56a-892e2612205f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.621221205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=acc207e4-6349-4ec4-b56a-892e2612205f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.664442741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=030638f7-c10f-484d-a548-a3c97cf7d933 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.664514185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=030638f7-c10f-484d-a548-a3c97cf7d933 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.666386072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ecc204f-8d44-4271-b11f-8aad2b4097c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.667497445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725284667464157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ecc204f-8d44-4271-b11f-8aad2b4097c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.671093515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa53fbf5-1211-4513-9202-224e2d388862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.671146887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa53fbf5-1211-4513-9202-224e2d388862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.671712806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa53fbf5-1211-4513-9202-224e2d388862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.733090433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e419983-af18-46b9-97b7-25583625a26b name=/runtime.v1.RuntimeService/Version
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.733262275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e419983-af18-46b9-97b7-25583625a26b name=/runtime.v1.RuntimeService/Version
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.734776586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d387924b-8a3c-49ea-b2cf-70a819db1831 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.735648460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725284735626393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d387924b-8a3c-49ea-b2cf-70a819db1831 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.736288779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58c543a4-c6ba-4fef-8854-434fe5df5452 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.736364403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58c543a4-c6ba-4fef-8854-434fe5df5452 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:48:04 ha-113226 crio[683]: time="2024-04-21 18:48:04.736592278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58c543a4-c6ba-4fef-8854-434fe5df5452 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f640c1c70ad       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   faa43bf489bc5       busybox-fc5497c4f-vvhg8
	7a81ee93000c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   34fd27c2e4881       storage-provisioner
	3e93f6b05d337       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   65ac1d3e43166       coredns-7db6d8ff4d-zhskp
	0b5d0ab414db7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   2607d8484c47e       coredns-7db6d8ff4d-n8sbt
	52318879bf160       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   3182dd9f53b28       kindnet-d7vgl
	7048fade386a1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   68e3a1db8a00b       kube-proxy-h75dp
	a95e4d8a09dd5       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   2fdc0249766bb       kube-vip-ha-113226
	6ebd07febd8dc       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   054e2ef640e47       kube-controller-manager-ha-113226
	51aef14398913       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   820c1a658f913       kube-apiserver-ha-113226
	9224faad5a972       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   6167071453e71       etcd-ha-113226
	e5498303bb3f9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   adb821c8b93f8       kube-scheduler-ha-113226
	
	
	==> coredns [0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3] <==
	[INFO] 10.244.2.2:36518 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001806972s
	[INFO] 10.244.0.4:38372 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001940066s
	[INFO] 10.244.1.2:42730 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000328172s
	[INFO] 10.244.1.2:47312 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173713s
	[INFO] 10.244.2.2:36986 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108899s
	[INFO] 10.244.2.2:36822 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003347s
	[INFO] 10.244.2.2:41452 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195319s
	[INFO] 10.244.2.2:60508 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074721s
	[INFO] 10.244.0.4:51454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104908s
	[INFO] 10.244.0.4:57376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109603s
	[INFO] 10.244.0.4:40827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078308s
	[INFO] 10.244.0.4:47256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153662s
	[INFO] 10.244.0.4:37424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014403s
	[INFO] 10.244.0.4:57234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144257s
	[INFO] 10.244.1.2:51901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259177s
	[INFO] 10.244.1.2:44450 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123202s
	[INFO] 10.244.2.2:53556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169239s
	[INFO] 10.244.2.2:42828 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117966s
	[INFO] 10.244.2.2:51827 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137514s
	[INFO] 10.244.0.4:56918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047175s
	[INFO] 10.244.1.2:45608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118838s
	[INFO] 10.244.1.2:50713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284967s
	[INFO] 10.244.2.2:58426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000313356s
	[INFO] 10.244.2.2:39340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130525s
	[INFO] 10.244.0.4:58687 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094588s
	
	
	==> coredns [3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f] <==
	[INFO] 10.244.1.2:44904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149133s
	[INFO] 10.244.1.2:43332 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.07284561s
	[INFO] 10.244.1.2:42838 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013593215s
	[INFO] 10.244.1.2:60318 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215954s
	[INFO] 10.244.1.2:46296 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158635s
	[INFO] 10.244.1.2:41498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146514s
	[INFO] 10.244.2.2:54180 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001245595s
	[INFO] 10.244.2.2:56702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118055s
	[INFO] 10.244.2.2:52049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103643s
	[INFO] 10.244.2.2:39892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013318s
	[INFO] 10.244.0.4:50393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617766s
	[INFO] 10.244.0.4:58125 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001163449s
	[INFO] 10.244.1.2:55583 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000370228s
	[INFO] 10.244.1.2:57237 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092539s
	[INFO] 10.244.2.2:42488 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129888s
	[INFO] 10.244.0.4:48460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104891s
	[INFO] 10.244.0.4:35562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112767s
	[INFO] 10.244.0.4:37396 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009448s
	[INFO] 10.244.1.2:40110 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268116s
	[INFO] 10.244.1.2:40165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166492s
	[INFO] 10.244.2.2:45365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014902s
	[INFO] 10.244.2.2:48282 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093124s
	[INFO] 10.244.0.4:43339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000357932s
	[INFO] 10.244.0.4:39537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086381s
	[INFO] 10.244.0.4:33649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093318s
	
	
	==> describe nodes <==
	Name:               ha-113226
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_40_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:40:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:48:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:41:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-113226
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f770328f068141e091b6c3dbf4a76488
	  System UUID:                f770328f-0681-41e0-91b6-c3dbf4a76488
	  Boot ID:                    bbf1e5be-35e8-4986-b694-bc173cac60e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vvhg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7db6d8ff4d-n8sbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 coredns-7db6d8ff4d-zhskp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 etcd-ha-113226                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-d7vgl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m
	  kube-system                 kube-apiserver-ha-113226             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-ha-113226    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-h75dp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-scheduler-ha-113226             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-vip-ha-113226                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m57s  kube-proxy       
	  Normal  Starting                 7m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s  kubelet          Node ha-113226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s  kubelet          Node ha-113226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s  kubelet          Node ha-113226 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m     node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal  NodeReady                6m57s  kubelet          Node ha-113226 status is now: NodeReady
	  Normal  RegisteredNode           4m51s  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal  RegisteredNode           3m39s  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	
	
	Name:               ha-113226-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_42_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:42:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:45:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    ha-113226-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e96ca06000049ab994a1d4c31482f88
	  System UUID:                8e96ca06-0000-49ab-994a-1d4c31482f88
	  Boot ID:                    2000e4cc-71bf-4b10-8615-26011164ba86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-djlm5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-113226-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-4hx6j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m9s
	  kube-system                 kube-apiserver-ha-113226-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-113226-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-nsv74                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-ha-113226-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-113226-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 5m4s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m10s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m10s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m10s)  kubelet          Node ha-113226-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m10s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m5s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           4m51s                 node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           3m39s                 node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  NodeNotReady             105s                  node-controller  Node ha-113226-m02 status is now: NodeNotReady
	
	
	Name:               ha-113226-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_44_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:44:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:48:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-113226-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e527acdd3b3544d5b53bced4a1abdb9a
	  System UUID:                e527acdd-3b35-44d5-b53b-ced4a1abdb9a
	  Boot ID:                    e881e85f-0867-4709-bc9b-ff693580d870
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lccdt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-113226-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-rhmbs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-113226-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ha-113226-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-shlwr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-113226-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-113226-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-113226-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	
	
	Name:               ha-113226-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_45_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:45:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:47:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-113226-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d55ce55d9e44738a42ed29cc9f1198
	  System UUID:                c1d55ce5-5d9e-4473-8a42-ed29cc9f1198
	  Boot ID:                    c7e0935e-75ea-414d-a3d7-b181d3048bca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jkd2l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-6s6v7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x2 over 2m53s)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x2 over 2m53s)  kubelet          Node ha-113226-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x2 over 2m53s)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-113226-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr21 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053196] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042774] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.623428] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.542269] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.723086] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.114701] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.062085] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054859] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.200473] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.119933] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.314231] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.898172] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.066212] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.334925] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +1.112693] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.070346] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.082112] kauditd_printk_skb: 40 callbacks suppressed
	[Apr21 18:41] kauditd_printk_skb: 21 callbacks suppressed
	[Apr21 18:43] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c] <==
	{"level":"warn","ts":"2024-04-21T18:48:05.037705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.041989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.058747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.060044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.064411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.08169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.091647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.100068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.104803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.109131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.118653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.126341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.133747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.139641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.143854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.15173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.158543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.158789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.165363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.169124Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.172408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.179421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.185832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.193004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:48:05.258951Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:48:05 up 7 min,  0 users,  load average: 0.73, 0.64, 0.31
	Linux ha-113226 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e] <==
	I0421 18:47:28.572845       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:47:38.583646       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:47:38.583726       1 main.go:227] handling current node
	I0421 18:47:38.583750       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:47:38.583767       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:47:38.583881       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:47:38.583901       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:47:38.583960       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:47:38.583979       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:47:48.603297       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:47:48.603343       1 main.go:227] handling current node
	I0421 18:47:48.603355       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:47:48.603365       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:47:48.603607       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:47:48.603688       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:47:48.603840       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:47:48.603899       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:47:58.610554       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:47:58.610601       1 main.go:227] handling current node
	I0421 18:47:58.610613       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:47:58.610619       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:47:58.610730       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:47:58.610761       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:47:58.610813       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:47:58.610818       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01] <==
	Trace[1105283356]: ["GuaranteedUpdate etcd3" audit-id:4c61baa6-37ba-4c82-8451-55676d7fcd54,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 639ms (18:45:12.853)
	Trace[1105283356]:  ---"Txn call completed" 638ms (18:45:13.492)]
	Trace[1105283356]: [639.485202ms] [639.485202ms] END
	I0421 18:45:13.495373       1 trace.go:236] Trace[462650061]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:bfc4de9f-cf2f-4a10-94a4-f663fdd11177,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:limitranges,scope:namespace,url:/api/v1/namespaces/kube-system/limitranges,user-agent:kube-apiserver/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:LIST (21-Apr-2024 18:45:12.847) (total time: 647ms):
	Trace[462650061]: ["List(recursive=true) etcd3" audit-id:bfc4de9f-cf2f-4a10-94a4-f663fdd11177,key:/limitranges/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 647ms (18:45:12.847)]
	Trace[462650061]: [647.968492ms] [647.968492ms] END
	I0421 18:45:13.508688       1 trace.go:236] Trace[558803442]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:345e531a-cef1-4b11-be90-d2c06c0142b4,client:192.168.39.60,api-group:,api-version:v1,name:ha-113226-m04,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-113226-m04,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:node-controller,verb:PATCH (21-Apr-2024 18:45:12.839) (total time: 669ms):
	Trace[558803442]: ["GuaranteedUpdate etcd3" audit-id:345e531a-cef1-4b11-be90-d2c06c0142b4,key:/minions/ha-113226-m04,type:*core.Node,resource:nodes 666ms (18:45:12.841)
	Trace[558803442]:  ---"Txn call completed" 645ms (18:45:13.488)]
	Trace[558803442]: ---"About to apply patch" 645ms (18:45:13.488)
	Trace[558803442]: ---"Object stored in database" 17ms (18:45:13.508)
	Trace[558803442]: [669.383879ms] [669.383879ms] END
	I0421 18:45:13.523111       1 trace.go:236] Trace[4398074]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:594f09d9-5b61-46a4-bb86-e814284267d3,client:192.168.39.60,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (21-Apr-2024 18:45:12.846) (total time: 676ms):
	Trace[4398074]: [676.90749ms] [676.90749ms] END
	I0421 18:45:13.526820       1 trace.go:236] Trace[1728014601]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:5402b02f-96dd-4b74-8ace-52598b8b3784,client:192.168.39.60,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (21-Apr-2024 18:45:12.845) (total time: 681ms):
	Trace[1728014601]: [681.264916ms] [681.264916ms] END
	I0421 18:45:13.556389       1 trace.go:236] Trace[1542993190]: "Patch" accept:application/json, */*,audit-id:a340bfba-ec53-4267-9175-bedbdec833fe,client:192.168.39.20,api-group:,api-version:v1,name:ha-113226-m04,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-113226-m04,user-agent:kubeadm/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (21-Apr-2024 18:45:12.843) (total time: 713ms):
	Trace[1542993190]: ["GuaranteedUpdate etcd3" audit-id:a340bfba-ec53-4267-9175-bedbdec833fe,key:/minions/ha-113226-m04,type:*core.Node,resource:nodes 713ms (18:45:12.843)
	Trace[1542993190]:  ---"Txn call completed" 650ms (18:45:13.495)
	Trace[1542993190]:  ---"Txn call completed" 33ms (18:45:13.555)]
	Trace[1542993190]: ---"About to apply patch" 651ms (18:45:13.495)
	Trace[1542993190]: ---"About to apply patch" 21ms (18:45:13.519)
	Trace[1542993190]: ---"Object stored in database" 34ms (18:45:13.556)
	Trace[1542993190]: [713.190152ms] [713.190152ms] END
	W0421 18:46:01.893487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.221 192.168.39.60]
	
	
	==> kube-controller-manager [6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639] <==
	I0421 18:44:34.208158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="249.253854ms"
	E0421 18:44:34.208434       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0421 18:44:34.233985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.385354ms"
	I0421 18:44:34.234126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.792µs"
	I0421 18:44:34.382775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.057398ms"
	I0421 18:44:34.382917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.789µs"
	I0421 18:44:35.900012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.407µs"
	I0421 18:44:35.973422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.604µs"
	I0421 18:44:37.417544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.16384ms"
	I0421 18:44:37.417720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.886µs"
	I0421 18:44:37.452430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.386746ms"
	I0421 18:44:37.452751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.854µs"
	I0421 18:44:37.495479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.92056ms"
	I0421 18:44:37.495642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.44µs"
	I0421 18:44:38.015065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.659092ms"
	I0421 18:44:38.015306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.071µs"
	E0421 18:45:12.381104       1 certificate_controller.go:146] Sync csr-chwql failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-chwql": the object has been modified; please apply your changes to the latest version and try again
	E0421 18:45:12.396861       1 certificate_controller.go:146] Sync csr-chwql failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-chwql": the object has been modified; please apply your changes to the latest version and try again
	I0421 18:45:12.833534       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-113226-m04\" does not exist"
	I0421 18:45:13.510552       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-113226-m04" podCIDRs=["10.244.3.0/24"]
	I0421 18:45:15.313582       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-113226-m04"
	I0421 18:45:23.057743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-113226-m04"
	I0421 18:46:20.339550       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-113226-m04"
	I0421 18:46:20.525993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.463367ms"
	I0421 18:46:20.526510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.029µs"
	
	
	==> kube-proxy [7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3] <==
	I0421 18:41:07.148284       1 server_linux.go:69] "Using iptables proxy"
	I0421 18:41:07.174666       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.60"]
	I0421 18:41:07.287305       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:41:07.287394       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:41:07.287423       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:41:07.290726       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:41:07.291117       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:41:07.291156       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:41:07.292264       1 config.go:192] "Starting service config controller"
	I0421 18:41:07.292303       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:41:07.292327       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:41:07.292330       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:41:07.295118       1 config.go:319] "Starting node config controller"
	I0421 18:41:07.295236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 18:41:07.392692       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 18:41:07.392777       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:41:07.396378       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab] <==
	E0421 18:44:08.322751       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmbs\": pod kindnet-rhmbs is already assigned to node \"ha-113226-m03\"" pod="kube-system/kindnet-rhmbs"
	I0421 18:44:08.322807       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmbs" node="ha-113226-m03"
	E0421 18:44:08.409086       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5mwwd\": pod kube-proxy-5mwwd is already assigned to node \"ha-113226-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5mwwd" node="ha-113226-m03"
	E0421 18:44:08.409238       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 00a647d3-d960-4114-866a-cdf4a6902acd(kube-system/kube-proxy-5mwwd) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5mwwd"
	E0421 18:44:08.409346       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5mwwd\": pod kube-proxy-5mwwd is already assigned to node \"ha-113226-m03\"" pod="kube-system/kube-proxy-5mwwd"
	I0421 18:44:08.410620       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5mwwd" node="ha-113226-m03"
	E0421 18:44:08.419831       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-spcr9\": pod kindnet-spcr9 is already assigned to node \"ha-113226-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-spcr9" node="ha-113226-m03"
	E0421 18:44:08.419899       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 42f59e78-d5eb-4b88-8160-b6a5248be0f5(kube-system/kindnet-spcr9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-spcr9"
	E0421 18:44:08.419915       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-spcr9\": pod kindnet-spcr9 is already assigned to node \"ha-113226-m03\"" pod="kube-system/kindnet-spcr9"
	I0421 18:44:08.419929       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-spcr9" node="ha-113226-m03"
	E0421 18:44:33.679054       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-djlm5\": pod busybox-fc5497c4f-djlm5 is already assigned to node \"ha-113226-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-djlm5" node="ha-113226-m02"
	E0421 18:44:33.679148       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4dbff1e7-4533-4189-8b00-098307a11d0b(default/busybox-fc5497c4f-djlm5) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-djlm5"
	E0421 18:44:33.679256       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-djlm5\": pod busybox-fc5497c4f-djlm5 is already assigned to node \"ha-113226-m02\"" pod="default/busybox-fc5497c4f-djlm5"
	I0421 18:44:33.679280       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-djlm5" node="ha-113226-m02"
	E0421 18:45:13.577372       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6s6v7\": pod kube-proxy-6s6v7 is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6s6v7" node="ha-113226-m04"
	E0421 18:45:13.577702       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5e72592e-0d66-4c92-982d-53f1d5a19c87(kube-system/kube-proxy-6s6v7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6s6v7"
	E0421 18:45:13.579518       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6s6v7\": pod kube-proxy-6s6v7 is already assigned to node \"ha-113226-m04\"" pod="kube-system/kube-proxy-6s6v7"
	I0421 18:45:13.579622       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6s6v7" node="ha-113226-m04"
	E0421 18:45:13.635627       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mvlpk\": pod kindnet-mvlpk is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mvlpk" node="ha-113226-m04"
	E0421 18:45:13.635736       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mvlpk\": pod kindnet-mvlpk is already assigned to node \"ha-113226-m04\"" pod="kube-system/kindnet-mvlpk"
	I0421 18:45:13.635762       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mvlpk" node="ha-113226-m04"
	E0421 18:45:13.791313       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jtqnc\": pod kindnet-jtqnc is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jtqnc" node="ha-113226-m04"
	E0421 18:45:13.791389       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7099f771-deb3-4c65-bd3f-d8a91874d516(kube-system/kindnet-jtqnc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jtqnc"
	E0421 18:45:13.791405       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jtqnc\": pod kindnet-jtqnc is already assigned to node \"ha-113226-m04\"" pod="kube-system/kindnet-jtqnc"
	I0421 18:45:13.791423       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jtqnc" node="ha-113226-m04"
	
	
	==> kubelet <==
	Apr 21 18:43:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:43:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:44:33 ha-113226 kubelet[1377]: I0421 18:44:33.720106    1377 topology_manager.go:215] "Topology Admit Handler" podUID="eb008f69-72f1-4ab3-a77a-791783889db9" podNamespace="default" podName="busybox-fc5497c4f-vvhg8"
	Apr 21 18:44:33 ha-113226 kubelet[1377]: I0421 18:44:33.801618    1377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8s2t\" (UniqueName: \"kubernetes.io/projected/eb008f69-72f1-4ab3-a77a-791783889db9-kube-api-access-j8s2t\") pod \"busybox-fc5497c4f-vvhg8\" (UID: \"eb008f69-72f1-4ab3-a77a-791783889db9\") " pod="default/busybox-fc5497c4f-vvhg8"
	Apr 21 18:44:37 ha-113226 kubelet[1377]: I0421 18:44:37.968424    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-vvhg8" podStartSLOduration=2.425497034 podStartE2EDuration="4.968122144s" podCreationTimestamp="2024-04-21 18:44:33 +0000 UTC" firstStartedPulling="2024-04-21 18:44:34.31899373 +0000 UTC m=+218.584599088" lastFinishedPulling="2024-04-21 18:44:36.861619021 +0000 UTC m=+221.127224198" observedRunningTime="2024-04-21 18:44:37.967241458 +0000 UTC m=+222.232846631" watchObservedRunningTime="2024-04-21 18:44:37.968122144 +0000 UTC m=+222.233727331"
	Apr 21 18:44:55 ha-113226 kubelet[1377]: E0421 18:44:55.931737    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:44:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:44:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:44:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:44:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:45:55 ha-113226 kubelet[1377]: E0421 18:45:55.930012    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:45:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:45:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:45:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:45:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:46:55 ha-113226 kubelet[1377]: E0421 18:46:55.928551    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:46:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:46:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:46:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:46:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:47:55 ha-113226 kubelet[1377]: E0421 18:47:55.926967    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:47:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:47:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:47:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:47:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-113226 -n ha-113226
helpers_test.go:261: (dbg) Run:  kubectl --context ha-113226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.07s)
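For a quicker manual look at the same non-Running pods the post-mortem query above collects, a rough equivalent (an illustrative sketch, not part of the test harness) is:

    kubectl --context ha-113226 get pods -A --field-selector=status.phase!=Running -o wide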

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (62.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (3.202628782s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:09.862910   27281 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:09.863021   27281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:09.863032   27281 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:09.863036   27281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:09.863243   27281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:09.863409   27281 out.go:298] Setting JSON to false
	I0421 18:48:09.863433   27281 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:09.863555   27281 notify.go:220] Checking for updates...
	I0421 18:48:09.863801   27281 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:09.863820   27281 status.go:255] checking status of ha-113226 ...
	I0421 18:48:09.864286   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:09.864354   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:09.881719   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0421 18:48:09.882139   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:09.882623   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:09.882645   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:09.882925   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:09.883163   27281 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:09.884441   27281 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:09.884461   27281 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:09.884808   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:09.884842   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:09.900888   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0421 18:48:09.901287   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:09.901754   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:09.901781   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:09.902090   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:09.902306   27281 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:09.904943   27281 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:09.905332   27281 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:09.905361   27281 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:09.905601   27281 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:09.905880   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:09.905909   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:09.919863   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0421 18:48:09.920305   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:09.920774   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:09.920793   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:09.921104   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:09.921282   27281 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:09.921436   27281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:09.921470   27281 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:09.923996   27281 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:09.924406   27281 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:09.924428   27281 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:09.924531   27281 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:09.924687   27281 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:09.924855   27281 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:09.924989   27281 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:10.009144   27281 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:10.017569   27281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:10.034451   27281 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:10.034483   27281 api_server.go:166] Checking apiserver status ...
	I0421 18:48:10.034519   27281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:10.049007   27281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:10.059248   27281 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:10.059338   27281 ssh_runner.go:195] Run: ls
	I0421 18:48:10.065051   27281 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:10.076619   27281 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:10.076641   27281 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:10.076650   27281 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:10.076668   27281 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:10.076958   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:10.076995   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:10.092605   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0421 18:48:10.093064   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:10.093613   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:10.093637   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:10.093947   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:10.094128   27281 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:10.095549   27281 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:48:10.095569   27281 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:10.096334   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:10.096376   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:10.112734   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0421 18:48:10.113100   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:10.113517   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:10.113544   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:10.113917   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:10.114122   27281 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:48:10.117006   27281 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:10.117425   27281 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:10.117451   27281 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:10.117594   27281 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:10.117853   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:10.117901   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:10.131286   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0421 18:48:10.131648   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:10.132031   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:10.132051   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:10.132330   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:10.132494   27281 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:48:10.132650   27281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:10.132674   27281 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:10.135085   27281 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:10.135525   27281 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:10.135556   27281 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:10.135668   27281 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:10.135818   27281 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:10.135947   27281 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:10.136064   27281 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:12.642380   27281 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:12.642488   27281 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:12.642503   27281 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:12.642513   27281 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:12.642529   27281 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:12.642536   27281 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:12.642823   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:12.642867   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:12.658174   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0421 18:48:12.658654   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:12.659104   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:12.659124   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:12.659410   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:12.659603   27281 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:12.661065   27281 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:12.661081   27281 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:12.661372   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:12.661415   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:12.676688   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0421 18:48:12.677097   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:12.677479   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:12.677496   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:12.677790   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:12.677980   27281 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:12.680600   27281 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:12.681049   27281 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:12.681077   27281 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:12.681200   27281 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:12.681470   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:12.681501   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:12.696376   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0421 18:48:12.696750   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:12.697181   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:12.697200   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:12.697528   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:12.697739   27281 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:12.697902   27281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:12.697921   27281 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:12.700254   27281 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:12.700625   27281 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:12.700659   27281 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:12.700762   27281 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:12.700914   27281 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:12.701071   27281 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:12.701186   27281 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:12.782425   27281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:12.800286   27281 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:12.800310   27281 api_server.go:166] Checking apiserver status ...
	I0421 18:48:12.800340   27281 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:12.816584   27281 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:12.834661   27281 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:12.834717   27281 ssh_runner.go:195] Run: ls
	I0421 18:48:12.839694   27281 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:12.844598   27281 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:12.844620   27281 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:12.844628   27281 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:12.844642   27281 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:12.844927   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:12.844958   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:12.860902   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0421 18:48:12.861322   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:12.861886   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:12.861915   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:12.862306   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:12.862509   27281 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:12.864306   27281 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:12.864320   27281 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:12.864579   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:12.864611   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:12.880411   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0421 18:48:12.880802   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:12.881295   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:12.881317   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:12.881620   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:12.881807   27281 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:12.884503   27281 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:12.884988   27281 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:12.885021   27281 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:12.885176   27281 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:12.885576   27281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:12.885635   27281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:12.901621   27281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38517
	I0421 18:48:12.901997   27281 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:12.902566   27281 main.go:141] libmachine: Using API Version  1
	I0421 18:48:12.902594   27281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:12.903010   27281 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:12.903184   27281 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:12.903374   27281 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:12.903393   27281 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:12.905896   27281 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:12.906284   27281 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:12.906303   27281 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:12.906448   27281 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:12.906594   27281 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:12.906751   27281 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:12.906881   27281 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:12.994188   27281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:13.011441   27281 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
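For context on what the status probe in the stderr above is doing: for each control-plane node it opens an SSH session, checks the kubelet with `systemctl is-active`, looks for the kube-apiserver process with pgrep, and finally confirms apiserver health by calling the cluster's load-balanced /healthz endpoint (https://192.168.39.254:8443/healthz in this run). Below is a minimal, illustrative Go sketch of that last step only; the function name and the TLS-skipping client are assumptions made for brevity, not minikube's actual implementation.

	// Illustrative sketch only (not minikube's code): probe the load-balanced
	// apiserver endpoint the way the log above reports it, i.e. GET
	// https://<vip>:8443/healthz and treat HTTP 200 with body "ok" as Running.
	// TLS verification is skipped here purely to keep the example short.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}

Anything other than a 200/"ok" response would be reported as an unhealthy apiserver, which is why the healthy nodes above log "returned 200: ok" and end up with APIServer:Running.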
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (5.508983826s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:13.704978   27382 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:13.705078   27382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:13.705099   27382 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:13.705103   27382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:13.705293   27382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:13.705449   27382 out.go:298] Setting JSON to false
	I0421 18:48:13.705473   27382 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:13.705592   27382 notify.go:220] Checking for updates...
	I0421 18:48:13.705839   27382 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:13.705852   27382 status.go:255] checking status of ha-113226 ...
	I0421 18:48:13.706238   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:13.706290   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:13.722493   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0421 18:48:13.722919   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:13.723481   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:13.723508   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:13.723906   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:13.724083   27382 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:13.725789   27382 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:13.725811   27382 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:13.726071   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:13.726108   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:13.742291   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I0421 18:48:13.742678   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:13.743143   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:13.743168   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:13.743471   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:13.743682   27382 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:13.746652   27382 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:13.747101   27382 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:13.747124   27382 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:13.747243   27382 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:13.747670   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:13.747723   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:13.763557   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I0421 18:48:13.763964   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:13.764514   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:13.764555   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:13.764819   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:13.765004   27382 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:13.765175   27382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:13.765216   27382 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:13.767780   27382 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:13.768167   27382 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:13.768204   27382 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:13.768333   27382 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:13.768504   27382 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:13.768651   27382 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:13.768854   27382 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:13.858352   27382 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:13.866870   27382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:13.885857   27382 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:13.885885   27382 api_server.go:166] Checking apiserver status ...
	I0421 18:48:13.885920   27382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:13.909736   27382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:13.940203   27382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:13.940256   27382 ssh_runner.go:195] Run: ls
	I0421 18:48:13.945695   27382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:13.949830   27382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:13.949849   27382 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:13.949858   27382 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:13.949873   27382 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:13.950198   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:13.950240   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:13.966403   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37181
	I0421 18:48:13.967092   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:13.968322   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:13.968346   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:13.968703   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:13.968931   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:13.970350   27382 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:48:13.970364   27382 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:13.970630   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:13.970667   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:13.984476   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I0421 18:48:13.984817   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:13.985195   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:13.985219   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:13.985524   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:13.985686   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:48:13.988628   27382 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:13.989053   27382 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:13.989092   27382 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:13.989254   27382 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:13.989663   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:13.989708   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:14.005788   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0421 18:48:14.006257   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:14.006775   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:14.006808   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:14.007085   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:14.007238   27382 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:48:14.007410   27382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:14.007433   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:14.010208   27382 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:14.010630   27382 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:14.010658   27382 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:14.010773   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:14.010931   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:14.011085   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:14.011200   27382 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:15.714287   27382 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:15.714386   27382 retry.go:31] will retry after 245.412028ms: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:15.960844   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:15.963678   27382 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:15.964083   27382 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:15.964130   27382 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:15.964292   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:15.964458   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:15.964592   27382 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:15.964753   27382 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:18.786295   27382 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:18.786363   27382 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:18.786380   27382 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:18.786390   27382 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:18.786416   27382 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:18.786426   27382 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:18.786747   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:18.786793   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:18.802319   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0421 18:48:18.802716   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:18.803166   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:18.803188   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:18.803512   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:18.803720   27382 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:18.805337   27382 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:18.805355   27382 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:18.805655   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:18.805695   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:18.819943   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0421 18:48:18.820311   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:18.820784   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:18.820809   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:18.821155   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:18.821367   27382 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:18.824154   27382 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:18.824562   27382 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:18.824587   27382 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:18.824821   27382 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:18.825220   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:18.825266   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:18.840327   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0421 18:48:18.840774   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:18.841289   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:18.841315   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:18.841688   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:18.841888   27382 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:18.842098   27382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:18.842121   27382 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:18.844722   27382 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:18.845127   27382 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:18.845154   27382 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:18.845306   27382 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:18.845454   27382 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:18.845619   27382 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:18.845746   27382 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:18.935293   27382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:18.956186   27382 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:18.956221   27382 api_server.go:166] Checking apiserver status ...
	I0421 18:48:18.956263   27382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:18.971171   27382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:18.982655   27382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:18.982718   27382 ssh_runner.go:195] Run: ls
	I0421 18:48:18.990097   27382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:18.995339   27382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:18.995361   27382 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:18.995372   27382 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:18.995390   27382 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:18.995675   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:18.995719   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:19.011605   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I0421 18:48:19.012021   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:19.012440   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:19.012460   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:19.012753   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:19.012930   27382 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:19.014466   27382 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:19.014487   27382 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:19.014782   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:19.014824   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:19.029636   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0421 18:48:19.029982   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:19.030458   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:19.030484   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:19.030815   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:19.030994   27382 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:19.033741   27382 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:19.034147   27382 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:19.034182   27382 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:19.034288   27382 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:19.034542   27382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:19.034573   27382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:19.048547   27382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0421 18:48:19.048948   27382 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:19.049435   27382 main.go:141] libmachine: Using API Version  1
	I0421 18:48:19.049462   27382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:19.049750   27382 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:19.049924   27382 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:19.050146   27382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:19.050167   27382 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:19.052719   27382 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:19.053165   27382 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:19.053193   27382 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:19.053336   27382 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:19.053481   27382 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:19.053639   27382 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:19.053746   27382 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:19.142891   27382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:19.158790   27382 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
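The failure mode for ha-113226-m02 is visible in the stderr above: every SSH dial to 192.168.39.233:22 returns "connect: no route to host", the dial is retried after a short backoff, and once the retries are exhausted the node is reported as Host:Error with kubelet and apiserver Nonexistent. A rough sketch of that dial-and-retry pattern is below, under the assumption of a fixed attempt count and backoff (the real retry.go chooses its own delays); it is an illustration of the behavior in the log, not the actual code.

	// Assumed sketch of the dial-and-retry behavior seen in the log: try to
	// reach the node's SSH port a few times with a short pause between
	// attempts, and give up with the last error if the host stays unreachable.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			time.Sleep(backoff)
		}
		return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
	}

	func main() {
		if _, err := dialWithRetry("192.168.39.233:22", 3, 300*time.Millisecond); err != nil {
			// A persistent "no route to host" here is what the status command
			// reports as Host:Error / Kubelet:Nonexistent for that node.
			fmt.Println(err)
		}
	}

With the secondary node unreachable, the status command exits non-zero, which is the "exit status 3" recorded by ha_test.go on each invocation above.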
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (4.6181025s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:21.077235   27499 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:21.077358   27499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:21.077368   27499 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:21.077374   27499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:21.077579   27499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:21.077755   27499 out.go:298] Setting JSON to false
	I0421 18:48:21.077784   27499 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:21.077814   27499 notify.go:220] Checking for updates...
	I0421 18:48:21.078215   27499 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:21.078232   27499 status.go:255] checking status of ha-113226 ...
	I0421 18:48:21.078634   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:21.078701   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:21.095007   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0421 18:48:21.095617   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:21.096194   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:21.096219   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:21.096601   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:21.096860   27499 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:21.098322   27499 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:21.098340   27499 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:21.098616   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:21.098655   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:21.114568   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0421 18:48:21.114944   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:21.115483   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:21.115508   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:21.115837   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:21.116025   27499 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:21.118670   27499 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:21.119119   27499 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:21.119183   27499 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:21.119564   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:21.119599   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:21.119755   27499 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:21.133578   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0421 18:48:21.133932   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:21.134410   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:21.134432   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:21.134759   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:21.134943   27499 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:21.135145   27499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:21.135179   27499 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:21.138040   27499 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:21.138518   27499 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:21.138549   27499 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:21.138696   27499 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:21.138925   27499 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:21.139082   27499 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:21.139201   27499 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:21.222905   27499 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:21.230103   27499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:21.246668   27499 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:21.246699   27499 api_server.go:166] Checking apiserver status ...
	I0421 18:48:21.246738   27499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:21.262493   27499 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:21.273635   27499 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:21.273718   27499 ssh_runner.go:195] Run: ls
	I0421 18:48:21.279107   27499 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:21.283768   27499 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:21.283795   27499 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:21.283809   27499 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:21.283830   27499 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:21.284665   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:21.284704   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:21.299871   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0421 18:48:21.300248   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:21.300717   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:21.300739   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:21.301029   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:21.301209   27499 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:21.302859   27499 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:48:21.302875   27499 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:21.303158   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:21.303201   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:21.318086   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0421 18:48:21.318477   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:21.318934   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:21.318961   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:21.319287   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:21.319480   27499 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:48:21.322148   27499 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:21.322559   27499 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:21.322581   27499 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:21.322731   27499 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:21.323026   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:21.323060   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:21.338837   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0421 18:48:21.339178   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:21.339633   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:21.339674   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:21.340038   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:21.340222   27499 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:48:21.340389   27499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:21.340409   27499 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:21.343175   27499 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:21.343619   27499 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:21.343639   27499 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:21.343768   27499 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:21.343921   27499 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:21.344054   27499 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:21.344169   27499 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:21.858257   27499 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:21.858316   27499 retry.go:31] will retry after 352.674933ms: dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:25.282327   27499 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:25.282401   27499 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:25.282423   27499 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:25.282437   27499 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:25.282466   27499 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:25.282475   27499 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:25.282867   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:25.282915   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:25.299045   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34079
	I0421 18:48:25.299502   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:25.300087   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:25.300107   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:25.300488   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:25.300748   27499 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:25.302491   27499 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:25.302509   27499 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:25.302879   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:25.302932   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:25.317265   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0421 18:48:25.317616   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:25.318021   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:25.318039   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:25.318334   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:25.318525   27499 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:25.321164   27499 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:25.321564   27499 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:25.321586   27499 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:25.321739   27499 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:25.322027   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:25.322077   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:25.337344   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0421 18:48:25.337752   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:25.338219   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:25.338244   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:25.338508   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:25.338682   27499 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:25.338863   27499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:25.338895   27499 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:25.341549   27499 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:25.341961   27499 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:25.341992   27499 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:25.342134   27499 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:25.342278   27499 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:25.342398   27499 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:25.342539   27499 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:25.422861   27499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:25.439615   27499 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:25.439645   27499 api_server.go:166] Checking apiserver status ...
	I0421 18:48:25.439687   27499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:25.454193   27499 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:25.466677   27499 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:25.466745   27499 ssh_runner.go:195] Run: ls
	I0421 18:48:25.471889   27499 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:25.479612   27499 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:25.479643   27499 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:25.479653   27499 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:25.479671   27499 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:25.480001   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:25.480067   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:25.495462   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0421 18:48:25.495838   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:25.496266   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:25.496289   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:25.496593   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:25.496764   27499 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:25.498432   27499 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:25.498449   27499 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:25.498830   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:25.498876   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:25.513417   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I0421 18:48:25.513789   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:25.514239   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:25.514259   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:25.514682   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:25.514875   27499 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:25.517848   27499 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:25.518262   27499 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:25.518296   27499 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:25.518432   27499 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:25.518710   27499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:25.518743   27499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:25.533300   27499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0421 18:48:25.533696   27499 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:25.534213   27499 main.go:141] libmachine: Using API Version  1
	I0421 18:48:25.534234   27499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:25.534521   27499 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:25.534707   27499 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:25.534909   27499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:25.534934   27499 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:25.537609   27499 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:25.538049   27499 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:25.538094   27499 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:25.538259   27499 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:25.538411   27499 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:25.538547   27499 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:25.538675   27499 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:25.626229   27499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:25.640447   27499 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
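One detail that repeats in every status pass above: the freezer-cgroup lookup (`sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`) exits with status 1 and is logged as a warning, after which the check falls back to the /healthz probe. A per-controller "freezer" line only appears on a cgroup v1 hierarchy, so an empty result like this is plausible when the guest uses the cgroup v2 unified layout. The sketch below is an assumed illustration of that lookup; the helper name and the fallback wiring are mine, not minikube's.

	// Assumed illustration: scan /proc/<pid>/cgroup for a v1-style freezer
	// line ("7:freezer:/kubepods/..."). On a cgroup v2 guest only "0::/path"
	// entries exist, so this returns false and the caller would fall back to
	// the /healthz probe, matching the warnings in the log above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func hasFreezerCgroup(pid int) (bool, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.SplitN(sc.Text(), ":", 3)
			if len(fields) == 3 && strings.Contains(fields[1], "freezer") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hasFreezerCgroup(1547) // PID taken from the log above
		fmt.Println(ok, err)
	}

Because the fallback healthz probe succeeds in these runs, the freezer warning is harmless noise here; the nodes that can be reached are still reported as APIServer:Running.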
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (4.5833081s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:27.386658   27599 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:27.386767   27599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:27.386778   27599 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:27.386782   27599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:27.386996   27599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:27.387195   27599 out.go:298] Setting JSON to false
	I0421 18:48:27.387223   27599 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:27.387271   27599 notify.go:220] Checking for updates...
	I0421 18:48:27.387597   27599 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:27.387610   27599 status.go:255] checking status of ha-113226 ...
	I0421 18:48:27.388005   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:27.388079   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:27.403932   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0421 18:48:27.404339   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:27.404951   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:27.404989   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:27.405285   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:27.405493   27599 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:27.407194   27599 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:27.407219   27599 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:27.407620   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:27.407676   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:27.422621   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0421 18:48:27.423026   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:27.423504   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:27.423519   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:27.423754   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:27.423927   27599 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:27.426639   27599 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:27.427057   27599 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:27.427091   27599 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:27.427225   27599 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:27.427609   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:27.427663   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:27.441350   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0421 18:48:27.441707   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:27.442169   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:27.442190   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:27.442468   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:27.442641   27599 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:27.442818   27599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:27.442843   27599 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:27.445855   27599 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:27.446115   27599 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:27.446146   27599 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:27.446329   27599 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:27.446497   27599 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:27.446683   27599 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:27.446839   27599 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:27.531153   27599 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:27.538593   27599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:27.557321   27599 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:27.557346   27599 api_server.go:166] Checking apiserver status ...
	I0421 18:48:27.557380   27599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:27.574713   27599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:27.587188   27599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:27.587268   27599 ssh_runner.go:195] Run: ls
	I0421 18:48:27.594373   27599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:27.598588   27599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:27.598611   27599 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:27.598624   27599 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:27.598645   27599 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:27.599410   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:27.599470   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:27.614412   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I0421 18:48:27.614754   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:27.615189   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:27.615210   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:27.615497   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:27.615663   27599 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:27.616958   27599 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:48:27.616970   27599 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:27.617330   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:27.617367   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:27.631963   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0421 18:48:27.632353   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:27.632805   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:27.632827   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:27.633104   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:27.633259   27599 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:48:27.635893   27599 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:27.636353   27599 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:27.636380   27599 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:27.636496   27599 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:27.636764   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:27.636795   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:27.650843   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0421 18:48:27.651180   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:27.651562   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:27.651589   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:27.651883   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:27.652072   27599 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:48:27.652249   27599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:27.652268   27599 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:27.654761   27599 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:27.655266   27599 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:27.655295   27599 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:27.655430   27599 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:27.655603   27599 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:27.655750   27599 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:27.655885   27599 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:28.358342   27599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:28.358403   27599 retry.go:31] will retry after 129.38553ms: dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:31.554359   27599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:31.554445   27599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:31.554466   27599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:31.554475   27599 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:31.554503   27599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:31.554512   27599 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:31.554871   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:31.554913   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:31.569183   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0421 18:48:31.569637   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:31.570137   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:31.570160   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:31.570479   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:31.570642   27599 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:31.572268   27599 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:31.572283   27599 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:31.572561   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:31.572615   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:31.587846   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0421 18:48:31.588225   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:31.588767   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:31.588789   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:31.589078   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:31.589289   27599 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:31.592082   27599 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:31.592566   27599 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:31.592593   27599 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:31.592773   27599 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:31.593172   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:31.593209   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:31.607386   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0421 18:48:31.607820   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:31.608283   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:31.608306   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:31.608590   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:31.608766   27599 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:31.608949   27599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:31.608974   27599 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:31.611845   27599 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:31.612356   27599 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:31.612395   27599 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:31.612537   27599 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:31.612685   27599 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:31.612833   27599 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:31.612959   27599 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:31.695858   27599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:31.713758   27599 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:31.713787   27599 api_server.go:166] Checking apiserver status ...
	I0421 18:48:31.713836   27599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:31.729065   27599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:31.740021   27599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:31.740068   27599 ssh_runner.go:195] Run: ls
	I0421 18:48:31.745643   27599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:31.750507   27599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:31.750528   27599 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:31.750539   27599 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:31.750558   27599 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:31.750837   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:31.750878   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:31.765463   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0421 18:48:31.765898   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:31.766372   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:31.766398   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:31.766691   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:31.766868   27599 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:31.768449   27599 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:31.768462   27599 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:31.768762   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:31.768801   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:31.783103   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0421 18:48:31.783447   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:31.783829   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:31.783850   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:31.784140   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:31.784287   27599 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:31.786592   27599 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:31.787089   27599 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:31.787118   27599 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:31.787263   27599 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:31.787538   27599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:31.787569   27599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:31.802508   27599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0421 18:48:31.802873   27599 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:31.803209   27599 main.go:141] libmachine: Using API Version  1
	I0421 18:48:31.803227   27599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:31.803467   27599 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:31.803579   27599 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:31.803751   27599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:31.803772   27599 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:31.806304   27599 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:31.806764   27599 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:31.806786   27599 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:31.806969   27599 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:31.807125   27599 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:31.807230   27599 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:31.807346   27599 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:31.895424   27599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:31.912365   27599 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
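Every `exit status 3` in this run traces back to the same line in the stderr above: `dial tcp 192.168.39.233:22: connect: no route to host`. Because the SSH dial to ha-113226-m02 never succeeds, `df -h /var` cannot run and the node is reported as `Host:Error` / `Kubelet:Nonexistent` / `APIServer:Nonexistent`. A minimal standalone sketch of that reachability probe (not minikube's sshutil code; address and port taken from the log) is:

```go
// Sketch: TCP probe of the node's SSH port, reproducing the failure mode above.
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable is a hypothetical helper, not a minikube function.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// On this run this returns: "dial tcp 192.168.39.233:22: connect: no route to host"
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.233:22", 5*time.Second); err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	fmt.Println("ssh port reachable")
}
```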
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (4.685181373s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:33.688387   27701 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:33.688637   27701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:33.688646   27701 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:33.688651   27701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:33.688811   27701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:33.689028   27701 out.go:298] Setting JSON to false
	I0421 18:48:33.689056   27701 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:33.689158   27701 notify.go:220] Checking for updates...
	I0421 18:48:33.689484   27701 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:33.689499   27701 status.go:255] checking status of ha-113226 ...
	I0421 18:48:33.689846   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:33.689913   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:33.707106   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I0421 18:48:33.707501   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:33.708056   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:33.708103   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:33.708427   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:33.708634   27701 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:33.710269   27701 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:33.710285   27701 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:33.710578   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:33.710614   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:33.725300   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0421 18:48:33.725666   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:33.726099   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:33.726123   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:33.726437   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:33.726606   27701 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:33.729506   27701 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:33.729968   27701 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:33.729990   27701 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:33.730124   27701 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:33.730401   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:33.730439   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:33.745814   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0421 18:48:33.746232   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:33.746680   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:33.746701   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:33.747008   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:33.747210   27701 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:33.747395   27701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:33.747443   27701 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:33.750481   27701 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:33.750885   27701 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:33.750921   27701 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:33.751027   27701 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:33.751199   27701 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:33.751365   27701 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:33.751511   27701 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:33.831156   27701 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:33.838566   27701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:33.855494   27701 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:33.855521   27701 api_server.go:166] Checking apiserver status ...
	I0421 18:48:33.855562   27701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:33.873321   27701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:33.888444   27701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:33.888504   27701 ssh_runner.go:195] Run: ls
	I0421 18:48:33.893955   27701 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:33.900311   27701 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:33.900337   27701 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:33.900347   27701 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:33.900365   27701 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:33.900717   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:33.900755   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:33.916116   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I0421 18:48:33.916546   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:33.917007   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:33.917033   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:33.917362   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:33.917561   27701 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:33.919173   27701 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:48:33.919190   27701 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:33.919469   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:33.919518   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:33.937149   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0421 18:48:33.937525   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:33.937985   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:33.938009   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:33.938336   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:33.938559   27701 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:48:33.941587   27701 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:33.942053   27701 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:33.942104   27701 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:33.942317   27701 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:33.942625   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:33.942670   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:33.957974   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45309
	I0421 18:48:33.958451   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:33.958896   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:33.958915   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:33.959202   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:33.959404   27701 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:48:33.959573   27701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:33.959595   27701 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:33.962412   27701 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:33.962803   27701 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:33.962832   27701 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:33.962961   27701 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:33.963110   27701 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:33.963237   27701 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:33.963339   27701 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:34.626283   27701 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:34.626340   27701 retry.go:31] will retry after 268.749117ms: dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:37.954315   27701 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:37.954394   27701 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:37.954435   27701 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:37.954447   27701 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:37.954464   27701 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:37.954474   27701 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:37.954800   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:37.954845   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:37.970485   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39837
	I0421 18:48:37.970894   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:37.971389   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:37.971411   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:37.971768   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:37.971956   27701 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:37.973535   27701 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:37.973550   27701 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:37.973834   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:37.973867   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:37.988466   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0421 18:48:37.988858   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:37.989283   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:37.989301   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:37.989598   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:37.989754   27701 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:37.992245   27701 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:37.992778   27701 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:37.992802   27701 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:37.992966   27701 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:37.993386   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:37.993432   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:38.008170   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0421 18:48:38.008515   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:38.008932   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:38.008950   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:38.009262   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:38.009429   27701 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:38.009601   27701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:38.009620   27701 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:38.012234   27701 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:38.012665   27701 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:38.012687   27701 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:38.012815   27701 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:38.012982   27701 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:38.013119   27701 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:38.013226   27701 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:38.100622   27701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:38.115495   27701 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:38.115523   27701 api_server.go:166] Checking apiserver status ...
	I0421 18:48:38.115558   27701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:38.129661   27701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:38.141440   27701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:38.141485   27701 ssh_runner.go:195] Run: ls
	I0421 18:48:38.146229   27701 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:38.153493   27701 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:38.153512   27701 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:38.153523   27701 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:38.153542   27701 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:38.153892   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:38.153936   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:38.168908   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0421 18:48:38.169335   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:38.169880   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:38.169899   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:38.170247   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:38.170439   27701 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:38.172009   27701 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:38.172022   27701 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:38.172284   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:38.172338   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:38.188263   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0421 18:48:38.188659   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:38.189137   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:38.189157   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:38.189511   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:38.189701   27701 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:38.192792   27701 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:38.193251   27701 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:38.193281   27701 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:38.193441   27701 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:38.193760   27701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:38.193802   27701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:38.209271   27701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0421 18:48:38.209750   27701 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:38.210256   27701 main.go:141] libmachine: Using API Version  1
	I0421 18:48:38.210276   27701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:38.210594   27701 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:38.210781   27701 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:38.210946   27701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:38.210969   27701 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:38.213858   27701 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:38.214297   27701 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:38.214335   27701 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:38.214472   27701 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:38.214627   27701 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:38.214788   27701 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:38.214905   27701 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:38.302300   27701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:38.317544   27701 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
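For the two control-plane nodes that do respond, the status command verifies the API server through the load-balanced endpoint logged above (`Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok`). The following is an illustrative sketch of that health probe, not minikube's api_server.go; certificate verification is skipped only because this sketch has no access to the cluster CA:

```go
// Sketch: probe the apiserver healthz endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the log shows "200: ok"
}
```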
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (3.747551612s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:44.414842   27832 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:44.414964   27832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:44.414974   27832 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:44.414980   27832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:44.415181   27832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:44.415331   27832 out.go:298] Setting JSON to false
	I0421 18:48:44.415353   27832 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:44.415470   27832 notify.go:220] Checking for updates...
	I0421 18:48:44.415887   27832 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:44.415907   27832 status.go:255] checking status of ha-113226 ...
	I0421 18:48:44.416362   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:44.416408   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:44.436327   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0421 18:48:44.436724   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:44.437272   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:44.437295   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:44.437723   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:44.437961   27832 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:44.439870   27832 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:44.439887   27832 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:44.440289   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:44.440333   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:44.455168   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0421 18:48:44.455504   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:44.455931   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:44.455950   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:44.456228   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:44.456388   27832 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:44.459240   27832 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:44.459639   27832 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:44.459653   27832 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:44.459816   27832 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:44.460101   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:44.460130   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:44.474211   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0421 18:48:44.474561   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:44.474972   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:44.474989   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:44.475271   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:44.475418   27832 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:44.475610   27832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:44.475637   27832 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:44.478152   27832 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:44.478583   27832 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:44.478608   27832 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:44.478725   27832 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:44.478879   27832 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:44.479027   27832 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:44.479173   27832 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:44.563068   27832 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:44.570630   27832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:44.586801   27832 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:44.586828   27832 api_server.go:166] Checking apiserver status ...
	I0421 18:48:44.586870   27832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:44.602412   27832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:44.614382   27832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:44.614432   27832 ssh_runner.go:195] Run: ls
	I0421 18:48:44.619697   27832 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:44.625733   27832 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:44.625751   27832 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:44.625761   27832 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:44.625779   27832 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:44.626103   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:44.626137   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:44.640665   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0421 18:48:44.641048   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:44.641459   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:44.641477   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:44.641825   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:44.642018   27832 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:44.643482   27832 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:48:44.643498   27832 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:44.643748   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:44.643787   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:44.657699   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
	I0421 18:48:44.658124   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:44.658595   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:44.658621   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:44.658887   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:44.659049   27832 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:48:44.662097   27832 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:44.662560   27832 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:44.662596   27832 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:44.662720   27832 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:48:44.663127   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:44.663175   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:44.678268   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0421 18:48:44.678618   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:44.679008   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:44.679030   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:44.679368   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:44.679551   27832 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:48:44.679738   27832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:44.679759   27832 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:48:44.682311   27832 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:44.682745   27832 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:48:44.682776   27832 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:48:44.682909   27832 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:48:44.683080   27832 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:48:44.683231   27832 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:48:44.683366   27832 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	W0421 18:48:47.746287   27832 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.233:22: connect: no route to host
	W0421 18:48:47.746380   27832 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0421 18:48:47.746398   27832 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:47.746405   27832 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:48:47.746421   27832 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	I0421 18:48:47.746429   27832 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:47.746723   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:47.746759   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:47.762419   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0421 18:48:47.762881   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:47.763428   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:47.763452   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:47.763732   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:47.763912   27832 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:47.765621   27832 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:47.765637   27832 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:47.765912   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:47.765943   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:47.779893   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0421 18:48:47.780342   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:47.780855   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:47.780877   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:47.781142   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:47.781316   27832 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:47.784045   27832 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:47.784436   27832 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:47.784456   27832 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:47.784600   27832 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:47.784935   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:47.784972   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:47.799373   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0421 18:48:47.799700   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:47.800100   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:47.800120   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:47.800449   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:47.800702   27832 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:47.800886   27832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:47.800919   27832 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:47.803351   27832 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:47.803828   27832 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:47.803845   27832 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:47.804040   27832 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:47.804221   27832 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:47.804358   27832 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:47.804483   27832 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:47.887101   27832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:47.903770   27832 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:47.903800   27832 api_server.go:166] Checking apiserver status ...
	I0421 18:48:47.903842   27832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:47.918930   27832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:47.932053   27832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:47.932105   27832 ssh_runner.go:195] Run: ls
	I0421 18:48:47.937566   27832 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:47.944157   27832 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:47.944178   27832 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:47.944189   27832 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:47.944206   27832 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:47.944530   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:47.944564   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:47.959185   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0421 18:48:47.959613   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:47.960157   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:47.960177   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:47.960512   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:47.960674   27832 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:47.962226   27832 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:47.962242   27832 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:47.962505   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:47.962537   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:47.977130   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41775
	I0421 18:48:47.977537   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:47.978040   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:47.978075   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:47.978458   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:47.978653   27832 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:47.981365   27832 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:47.981793   27832 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:47.981832   27832 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:47.981976   27832 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:47.982307   27832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:47.982342   27832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:47.997548   27832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0421 18:48:47.997911   27832 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:47.998410   27832 main.go:141] libmachine: Using API Version  1
	I0421 18:48:47.998435   27832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:47.998707   27832 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:47.998908   27832 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:47.999050   27832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:47.999065   27832 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:48.001747   27832 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:48.002190   27832 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:48.002226   27832 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:48.002401   27832 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:48.002549   27832 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:48.002667   27832 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:48.002819   27832 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:48.090402   27832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:48.106472   27832 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 7 (693.845273ms)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:48:57.889506   27986 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:48:57.889608   27986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:57.889616   27986 out.go:304] Setting ErrFile to fd 2...
	I0421 18:48:57.889620   27986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:48:57.889810   27986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:48:57.889963   27986 out.go:298] Setting JSON to false
	I0421 18:48:57.889986   27986 mustload.go:65] Loading cluster: ha-113226
	I0421 18:48:57.890045   27986 notify.go:220] Checking for updates...
	I0421 18:48:57.890427   27986 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:48:57.890446   27986 status.go:255] checking status of ha-113226 ...
	I0421 18:48:57.890901   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:57.890955   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:57.908896   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0421 18:48:57.909296   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:57.909880   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:57.909904   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:57.910292   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:57.910465   27986 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:48:57.912267   27986 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:48:57.912291   27986 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:57.912577   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:57.912614   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:57.927285   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0421 18:48:57.927672   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:57.928111   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:57.928143   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:57.928470   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:57.928651   27986 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:48:57.931488   27986 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:57.931922   27986 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:57.931948   27986 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:57.932097   27986 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:48:57.932493   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:57.932535   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:57.947506   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0421 18:48:57.947942   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:57.948385   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:57.948406   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:57.948702   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:57.948898   27986 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:48:57.949092   27986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:57.949117   27986 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:48:57.951497   27986 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:57.951887   27986 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:48:57.951906   27986 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:48:57.952028   27986 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:48:57.952198   27986 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:48:57.952358   27986 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:48:57.952466   27986 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:48:58.041320   27986 ssh_runner.go:195] Run: systemctl --version
	I0421 18:48:58.049456   27986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:58.070679   27986 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:58.070722   27986 api_server.go:166] Checking apiserver status ...
	I0421 18:48:58.070786   27986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:58.089684   27986 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:48:58.113057   27986 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:58.113125   27986 ssh_runner.go:195] Run: ls
	I0421 18:48:58.118549   27986 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:58.123451   27986 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:58.123485   27986 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:48:58.123498   27986 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:58.123524   27986 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:48:58.123898   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.123945   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.139397   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0421 18:48:58.139820   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.140347   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.140374   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.140704   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.140904   27986 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:48:58.142526   27986 status.go:330] ha-113226-m02 host status = "Stopped" (err=<nil>)
	I0421 18:48:58.142541   27986 status.go:343] host is not running, skipping remaining checks
	I0421 18:48:58.142548   27986 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:58.142563   27986 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:48:58.142974   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.143022   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.157974   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
	I0421 18:48:58.158346   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.158823   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.158845   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.159172   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.159368   27986 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:48:58.160954   27986 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:48:58.160969   27986 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:58.161251   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.161284   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.176068   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0421 18:48:58.176552   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.177109   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.177132   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.177452   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.177641   27986 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:48:58.180133   27986 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:58.180503   27986 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:58.180532   27986 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:58.180674   27986 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:48:58.180986   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.181020   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.196413   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I0421 18:48:58.196780   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.197356   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.197377   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.197708   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.197940   27986 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:48:58.198158   27986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:58.198183   27986 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:48:58.200813   27986 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:58.201212   27986 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:48:58.201238   27986 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:48:58.201343   27986 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:48:58.201500   27986 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:48:58.201652   27986 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:48:58.201826   27986 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:48:58.289420   27986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:58.311449   27986 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:48:58.311478   27986 api_server.go:166] Checking apiserver status ...
	I0421 18:48:58.311518   27986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:48:58.334933   27986 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:48:58.347379   27986 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:48:58.347438   27986 ssh_runner.go:195] Run: ls
	I0421 18:48:58.352404   27986 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:48:58.357365   27986 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:48:58.357392   27986 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:48:58.357404   27986 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:48:58.357422   27986 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:48:58.357797   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.357832   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.373188   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35561
	I0421 18:48:58.373555   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.374014   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.374037   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.374449   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.374682   27986 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:48:58.376286   27986 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:48:58.376303   27986 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:58.376562   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.376593   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.391054   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0421 18:48:58.391443   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.391933   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.391958   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.392331   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.392557   27986 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:48:58.395690   27986 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:58.396076   27986 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:58.396107   27986 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:58.396267   27986 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:48:58.396554   27986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:48:58.396593   27986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:48:58.411217   27986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0421 18:48:58.411611   27986 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:48:58.412069   27986 main.go:141] libmachine: Using API Version  1
	I0421 18:48:58.412095   27986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:48:58.412468   27986 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:48:58.412668   27986 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:48:58.412859   27986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:48:58.412886   27986 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:48:58.415430   27986 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:58.415825   27986 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:48:58.415847   27986 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:48:58.416026   27986 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:48:58.416243   27986 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:48:58.416435   27986 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:48:58.416578   27986 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:48:58.507584   27986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:48:58.523927   27986 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0421 18:49:06.204766   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 7 (681.7472ms)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-113226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:49:08.821819   28090 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:49:08.821926   28090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:49:08.821935   28090 out.go:304] Setting ErrFile to fd 2...
	I0421 18:49:08.821939   28090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:49:08.822174   28090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:49:08.822376   28090 out.go:298] Setting JSON to false
	I0421 18:49:08.822402   28090 mustload.go:65] Loading cluster: ha-113226
	I0421 18:49:08.822469   28090 notify.go:220] Checking for updates...
	I0421 18:49:08.822758   28090 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:49:08.822770   28090 status.go:255] checking status of ha-113226 ...
	I0421 18:49:08.823121   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:08.823230   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:08.840608   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I0421 18:49:08.840987   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:08.841633   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:08.841654   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:08.841938   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:08.842128   28090 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:49:08.843674   28090 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:49:08.843697   28090 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:49:08.844098   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:08.844146   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:08.859109   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44165
	I0421 18:49:08.859545   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:08.859997   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:08.860024   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:08.860389   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:08.860601   28090 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:49:08.863447   28090 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:49:08.863961   28090 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:49:08.863995   28090 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:49:08.864074   28090 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:49:08.864375   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:08.864407   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:08.878900   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I0421 18:49:08.879323   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:08.879815   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:08.879835   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:08.880186   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:08.880398   28090 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:49:08.880640   28090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:49:08.880670   28090 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:49:08.883870   28090 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:49:08.884305   28090 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:49:08.884326   28090 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:49:08.884447   28090 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:49:08.884610   28090 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:49:08.884778   28090 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:49:08.884918   28090 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:49:08.976861   28090 ssh_runner.go:195] Run: systemctl --version
	I0421 18:49:08.986291   28090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:49:09.003546   28090 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:49:09.003572   28090 api_server.go:166] Checking apiserver status ...
	I0421 18:49:09.003602   28090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:49:09.020750   28090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0421 18:49:09.038246   28090 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:49:09.038297   28090 ssh_runner.go:195] Run: ls
	I0421 18:49:09.043832   28090 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:49:09.051213   28090 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:49:09.051247   28090 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:49:09.051259   28090 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:49:09.051282   28090 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:49:09.051576   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.051620   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.066640   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0421 18:49:09.067092   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.067587   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.067615   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.067981   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.068222   28090 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:49:09.069773   28090 status.go:330] ha-113226-m02 host status = "Stopped" (err=<nil>)
	I0421 18:49:09.069786   28090 status.go:343] host is not running, skipping remaining checks
	I0421 18:49:09.069791   28090 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:49:09.069810   28090 status.go:255] checking status of ha-113226-m03 ...
	I0421 18:49:09.070185   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.070243   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.085443   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0421 18:49:09.085806   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.086306   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.086327   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.086670   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.086839   28090 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:49:09.088607   28090 status.go:330] ha-113226-m03 host status = "Running" (err=<nil>)
	I0421 18:49:09.088622   28090 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:49:09.088894   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.088928   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.103573   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0421 18:49:09.103937   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.104408   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.104433   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.104755   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.104941   28090 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:49:09.107506   28090 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:49:09.107879   28090 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:49:09.107898   28090 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:49:09.108050   28090 host.go:66] Checking if "ha-113226-m03" exists ...
	I0421 18:49:09.108341   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.108386   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.122776   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0421 18:49:09.123246   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.123730   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.123755   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.124069   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.124242   28090 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:49:09.124440   28090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:49:09.124460   28090 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:49:09.127281   28090 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:49:09.127712   28090 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:49:09.127747   28090 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:49:09.127926   28090 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:49:09.128098   28090 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:49:09.128239   28090 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:49:09.128358   28090 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:49:09.211490   28090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:49:09.235360   28090 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:49:09.235385   28090 api_server.go:166] Checking apiserver status ...
	I0421 18:49:09.235415   28090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:49:09.253345   28090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0421 18:49:09.267181   28090 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:49:09.267239   28090 ssh_runner.go:195] Run: ls
	I0421 18:49:09.273125   28090 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:49:09.280169   28090 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:49:09.280204   28090 status.go:422] ha-113226-m03 apiserver status = Running (err=<nil>)
	I0421 18:49:09.280216   28090 status.go:257] ha-113226-m03 status: &{Name:ha-113226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:49:09.280235   28090 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:49:09.280693   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.280747   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.297696   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0421 18:49:09.298147   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.298660   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.298687   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.299044   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.299267   28090 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:49:09.300699   28090 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:49:09.300716   28090 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:49:09.300989   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.301024   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.316133   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34843
	I0421 18:49:09.316611   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.317059   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.317085   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.317392   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.317583   28090 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:49:09.320355   28090 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:49:09.320777   28090 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:49:09.320797   28090 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:49:09.320942   28090 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:49:09.321269   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:09.321310   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:09.336331   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0421 18:49:09.336831   28090 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:09.337285   28090 main.go:141] libmachine: Using API Version  1
	I0421 18:49:09.337309   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:09.337600   28090 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:09.337767   28090 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:49:09.337962   28090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:49:09.337983   28090 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:49:09.340628   28090 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:49:09.341068   28090 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:49:09.341090   28090 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:49:09.341244   28090 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:49:09.341431   28090 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:49:09.341575   28090 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:49:09.341730   28090 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:49:09.431226   28090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:49:09.448247   28090 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-113226 -n ha-113226
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-113226 logs -n 25: (1.58210286s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226:/home/docker/cp-test_ha-113226-m03_ha-113226.txt                       |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226 sudo cat                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226.txt                                 |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m04 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp testdata/cp-test.txt                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226:/home/docker/cp-test_ha-113226-m04_ha-113226.txt                       |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226 sudo cat                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226.txt                                 |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03:/home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m03 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-113226 node stop m02 -v=7                                                     | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-113226 node start m02 -v=7                                                    | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:40:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:40:11.351426   22327 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:40:11.351551   22327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:40:11.351560   22327 out.go:304] Setting ErrFile to fd 2...
	I0421 18:40:11.351564   22327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:40:11.351730   22327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:40:11.352359   22327 out.go:298] Setting JSON to false
	I0421 18:40:11.353185   22327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1309,"bootTime":1713723502,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:40:11.353252   22327 start.go:139] virtualization: kvm guest
	I0421 18:40:11.355621   22327 out.go:177] * [ha-113226] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:40:11.357129   22327 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:40:11.358411   22327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:40:11.357131   22327 notify.go:220] Checking for updates...
	I0421 18:40:11.361001   22327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:40:11.362403   22327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:40:11.363762   22327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:40:11.365007   22327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:40:11.366390   22327 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:40:11.401544   22327 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 18:40:11.402902   22327 start.go:297] selected driver: kvm2
	I0421 18:40:11.402917   22327 start.go:901] validating driver "kvm2" against <nil>
	I0421 18:40:11.402936   22327 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:40:11.403588   22327 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:40:11.403667   22327 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:40:11.418878   22327 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:40:11.418949   22327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:40:11.419148   22327 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:40:11.419193   22327 cni.go:84] Creating CNI manager for ""
	I0421 18:40:11.419205   22327 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0421 18:40:11.419209   22327 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0421 18:40:11.419261   22327 start.go:340] cluster config:
	{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:40:11.419383   22327 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:40:11.422109   22327 out.go:177] * Starting "ha-113226" primary control-plane node in "ha-113226" cluster
	I0421 18:40:11.423272   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:40:11.423313   22327 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:40:11.423327   22327 cache.go:56] Caching tarball of preloaded images
	I0421 18:40:11.423409   22327 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:40:11.423421   22327 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:40:11.423718   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:40:11.423751   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json: {Name:mk8f2789a9447c7baf30689bce1ddb3bc9f26118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:11.423891   22327 start.go:360] acquireMachinesLock for ha-113226: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:40:11.423927   22327 start.go:364] duration metric: took 20.889µs to acquireMachinesLock for "ha-113226"
	I0421 18:40:11.423947   22327 start.go:93] Provisioning new machine with config: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:40:11.424007   22327 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 18:40:11.425533   22327 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 18:40:11.425658   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:40:11.425700   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:40:11.439802   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0421 18:40:11.440237   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:40:11.440820   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:40:11.440843   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:40:11.441206   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:40:11.441387   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:11.441534   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:11.441739   22327 start.go:159] libmachine.API.Create for "ha-113226" (driver="kvm2")
	I0421 18:40:11.441771   22327 client.go:168] LocalClient.Create starting
	I0421 18:40:11.441800   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:40:11.441836   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:40:11.441853   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:40:11.441903   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:40:11.441924   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:40:11.441936   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:40:11.441952   22327 main.go:141] libmachine: Running pre-create checks...
	I0421 18:40:11.441962   22327 main.go:141] libmachine: (ha-113226) Calling .PreCreateCheck
	I0421 18:40:11.442321   22327 main.go:141] libmachine: (ha-113226) Calling .GetConfigRaw
	I0421 18:40:11.442715   22327 main.go:141] libmachine: Creating machine...
	I0421 18:40:11.442730   22327 main.go:141] libmachine: (ha-113226) Calling .Create
	I0421 18:40:11.442851   22327 main.go:141] libmachine: (ha-113226) Creating KVM machine...
	I0421 18:40:11.443954   22327 main.go:141] libmachine: (ha-113226) DBG | found existing default KVM network
	I0421 18:40:11.444608   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.444443   22350 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0421 18:40:11.444634   22327 main.go:141] libmachine: (ha-113226) DBG | created network xml: 
	I0421 18:40:11.444652   22327 main.go:141] libmachine: (ha-113226) DBG | <network>
	I0421 18:40:11.444667   22327 main.go:141] libmachine: (ha-113226) DBG |   <name>mk-ha-113226</name>
	I0421 18:40:11.444680   22327 main.go:141] libmachine: (ha-113226) DBG |   <dns enable='no'/>
	I0421 18:40:11.444688   22327 main.go:141] libmachine: (ha-113226) DBG |   
	I0421 18:40:11.444695   22327 main.go:141] libmachine: (ha-113226) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0421 18:40:11.444702   22327 main.go:141] libmachine: (ha-113226) DBG |     <dhcp>
	I0421 18:40:11.444708   22327 main.go:141] libmachine: (ha-113226) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0421 18:40:11.444716   22327 main.go:141] libmachine: (ha-113226) DBG |     </dhcp>
	I0421 18:40:11.444728   22327 main.go:141] libmachine: (ha-113226) DBG |   </ip>
	I0421 18:40:11.444735   22327 main.go:141] libmachine: (ha-113226) DBG |   
	I0421 18:40:11.444740   22327 main.go:141] libmachine: (ha-113226) DBG | </network>
	I0421 18:40:11.444743   22327 main.go:141] libmachine: (ha-113226) DBG | 
	I0421 18:40:11.449847   22327 main.go:141] libmachine: (ha-113226) DBG | trying to create private KVM network mk-ha-113226 192.168.39.0/24...
	I0421 18:40:11.515066   22327 main.go:141] libmachine: (ha-113226) DBG | private KVM network mk-ha-113226 192.168.39.0/24 created
	I0421 18:40:11.515127   22327 main.go:141] libmachine: (ha-113226) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226 ...
	I0421 18:40:11.515158   22327 main.go:141] libmachine: (ha-113226) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:40:11.515171   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.515046   22350 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:40:11.515229   22327 main.go:141] libmachine: (ha-113226) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:40:11.742006   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.741846   22350 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa...
	I0421 18:40:11.783726   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.783582   22350 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/ha-113226.rawdisk...
	I0421 18:40:11.783761   22327 main.go:141] libmachine: (ha-113226) DBG | Writing magic tar header
	I0421 18:40:11.783772   22327 main.go:141] libmachine: (ha-113226) DBG | Writing SSH key tar header
	I0421 18:40:11.783788   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:11.783694   22350 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226 ...
	I0421 18:40:11.783821   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226 (perms=drwx------)
	I0421 18:40:11.783843   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226
	I0421 18:40:11.783856   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:40:11.783878   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:40:11.783899   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:40:11.783908   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:40:11.783915   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:40:11.783938   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:40:11.783947   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:40:11.783957   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:40:11.783967   22327 main.go:141] libmachine: (ha-113226) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:40:11.783980   22327 main.go:141] libmachine: (ha-113226) Creating domain...
	I0421 18:40:11.783990   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:40:11.783994   22327 main.go:141] libmachine: (ha-113226) DBG | Checking permissions on dir: /home
	I0421 18:40:11.784000   22327 main.go:141] libmachine: (ha-113226) DBG | Skipping /home - not owner
	I0421 18:40:11.784997   22327 main.go:141] libmachine: (ha-113226) define libvirt domain using xml: 
	I0421 18:40:11.785031   22327 main.go:141] libmachine: (ha-113226) <domain type='kvm'>
	I0421 18:40:11.785041   22327 main.go:141] libmachine: (ha-113226)   <name>ha-113226</name>
	I0421 18:40:11.785054   22327 main.go:141] libmachine: (ha-113226)   <memory unit='MiB'>2200</memory>
	I0421 18:40:11.785070   22327 main.go:141] libmachine: (ha-113226)   <vcpu>2</vcpu>
	I0421 18:40:11.785081   22327 main.go:141] libmachine: (ha-113226)   <features>
	I0421 18:40:11.785095   22327 main.go:141] libmachine: (ha-113226)     <acpi/>
	I0421 18:40:11.785106   22327 main.go:141] libmachine: (ha-113226)     <apic/>
	I0421 18:40:11.785129   22327 main.go:141] libmachine: (ha-113226)     <pae/>
	I0421 18:40:11.785160   22327 main.go:141] libmachine: (ha-113226)     
	I0421 18:40:11.785176   22327 main.go:141] libmachine: (ha-113226)   </features>
	I0421 18:40:11.785187   22327 main.go:141] libmachine: (ha-113226)   <cpu mode='host-passthrough'>
	I0421 18:40:11.785199   22327 main.go:141] libmachine: (ha-113226)   
	I0421 18:40:11.785211   22327 main.go:141] libmachine: (ha-113226)   </cpu>
	I0421 18:40:11.785223   22327 main.go:141] libmachine: (ha-113226)   <os>
	I0421 18:40:11.785239   22327 main.go:141] libmachine: (ha-113226)     <type>hvm</type>
	I0421 18:40:11.785253   22327 main.go:141] libmachine: (ha-113226)     <boot dev='cdrom'/>
	I0421 18:40:11.785262   22327 main.go:141] libmachine: (ha-113226)     <boot dev='hd'/>
	I0421 18:40:11.785276   22327 main.go:141] libmachine: (ha-113226)     <bootmenu enable='no'/>
	I0421 18:40:11.785287   22327 main.go:141] libmachine: (ha-113226)   </os>
	I0421 18:40:11.785301   22327 main.go:141] libmachine: (ha-113226)   <devices>
	I0421 18:40:11.785321   22327 main.go:141] libmachine: (ha-113226)     <disk type='file' device='cdrom'>
	I0421 18:40:11.785339   22327 main.go:141] libmachine: (ha-113226)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/boot2docker.iso'/>
	I0421 18:40:11.785352   22327 main.go:141] libmachine: (ha-113226)       <target dev='hdc' bus='scsi'/>
	I0421 18:40:11.785365   22327 main.go:141] libmachine: (ha-113226)       <readonly/>
	I0421 18:40:11.785373   22327 main.go:141] libmachine: (ha-113226)     </disk>
	I0421 18:40:11.785408   22327 main.go:141] libmachine: (ha-113226)     <disk type='file' device='disk'>
	I0421 18:40:11.785437   22327 main.go:141] libmachine: (ha-113226)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:40:11.785463   22327 main.go:141] libmachine: (ha-113226)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/ha-113226.rawdisk'/>
	I0421 18:40:11.785480   22327 main.go:141] libmachine: (ha-113226)       <target dev='hda' bus='virtio'/>
	I0421 18:40:11.785496   22327 main.go:141] libmachine: (ha-113226)     </disk>
	I0421 18:40:11.785516   22327 main.go:141] libmachine: (ha-113226)     <interface type='network'>
	I0421 18:40:11.785532   22327 main.go:141] libmachine: (ha-113226)       <source network='mk-ha-113226'/>
	I0421 18:40:11.785545   22327 main.go:141] libmachine: (ha-113226)       <model type='virtio'/>
	I0421 18:40:11.785557   22327 main.go:141] libmachine: (ha-113226)     </interface>
	I0421 18:40:11.785569   22327 main.go:141] libmachine: (ha-113226)     <interface type='network'>
	I0421 18:40:11.785583   22327 main.go:141] libmachine: (ha-113226)       <source network='default'/>
	I0421 18:40:11.785591   22327 main.go:141] libmachine: (ha-113226)       <model type='virtio'/>
	I0421 18:40:11.785604   22327 main.go:141] libmachine: (ha-113226)     </interface>
	I0421 18:40:11.785615   22327 main.go:141] libmachine: (ha-113226)     <serial type='pty'>
	I0421 18:40:11.785628   22327 main.go:141] libmachine: (ha-113226)       <target port='0'/>
	I0421 18:40:11.785640   22327 main.go:141] libmachine: (ha-113226)     </serial>
	I0421 18:40:11.785655   22327 main.go:141] libmachine: (ha-113226)     <console type='pty'>
	I0421 18:40:11.785670   22327 main.go:141] libmachine: (ha-113226)       <target type='serial' port='0'/>
	I0421 18:40:11.785700   22327 main.go:141] libmachine: (ha-113226)     </console>
	I0421 18:40:11.785711   22327 main.go:141] libmachine: (ha-113226)     <rng model='virtio'>
	I0421 18:40:11.785721   22327 main.go:141] libmachine: (ha-113226)       <backend model='random'>/dev/random</backend>
	I0421 18:40:11.785731   22327 main.go:141] libmachine: (ha-113226)     </rng>
	I0421 18:40:11.785740   22327 main.go:141] libmachine: (ha-113226)     
	I0421 18:40:11.785751   22327 main.go:141] libmachine: (ha-113226)     
	I0421 18:40:11.785761   22327 main.go:141] libmachine: (ha-113226)   </devices>
	I0421 18:40:11.785771   22327 main.go:141] libmachine: (ha-113226) </domain>
	I0421 18:40:11.785782   22327 main.go:141] libmachine: (ha-113226) 
	I0421 18:40:11.790191   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:b2:e7:b7 in network default
	I0421 18:40:11.790759   22327 main.go:141] libmachine: (ha-113226) Ensuring networks are active...
	I0421 18:40:11.790775   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:11.791527   22327 main.go:141] libmachine: (ha-113226) Ensuring network default is active
	I0421 18:40:11.791904   22327 main.go:141] libmachine: (ha-113226) Ensuring network mk-ha-113226 is active
	I0421 18:40:11.792401   22327 main.go:141] libmachine: (ha-113226) Getting domain xml...
	I0421 18:40:11.793172   22327 main.go:141] libmachine: (ha-113226) Creating domain...
	I0421 18:40:12.949988   22327 main.go:141] libmachine: (ha-113226) Waiting to get IP...
	I0421 18:40:12.950927   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:12.951330   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:12.951385   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:12.951324   22350 retry.go:31] will retry after 257.738769ms: waiting for machine to come up
	I0421 18:40:13.210794   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:13.211372   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:13.211397   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:13.211345   22350 retry.go:31] will retry after 336.916795ms: waiting for machine to come up
	I0421 18:40:13.549746   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:13.550237   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:13.550264   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:13.550201   22350 retry.go:31] will retry after 322.471756ms: waiting for machine to come up
	I0421 18:40:13.874629   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:13.874924   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:13.874949   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:13.874888   22350 retry.go:31] will retry after 550.724254ms: waiting for machine to come up
	I0421 18:40:14.427502   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:14.427860   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:14.427888   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:14.427830   22350 retry.go:31] will retry after 539.109512ms: waiting for machine to come up
	I0421 18:40:14.968465   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:14.968850   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:14.968878   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:14.968802   22350 retry.go:31] will retry after 902.697901ms: waiting for machine to come up
	I0421 18:40:15.872823   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:15.873140   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:15.873165   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:15.873103   22350 retry.go:31] will retry after 1.015120461s: waiting for machine to come up
	I0421 18:40:16.889857   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:16.890283   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:16.890349   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:16.890220   22350 retry.go:31] will retry after 915.582708ms: waiting for machine to come up
	I0421 18:40:17.807314   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:17.807737   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:17.807767   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:17.807692   22350 retry.go:31] will retry after 1.649437086s: waiting for machine to come up
	I0421 18:40:19.459400   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:19.459862   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:19.459903   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:19.459840   22350 retry.go:31] will retry after 1.425571352s: waiting for machine to come up
	I0421 18:40:20.887632   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:20.888135   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:20.888163   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:20.888078   22350 retry.go:31] will retry after 2.416069759s: waiting for machine to come up
	I0421 18:40:23.306941   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:23.307438   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:23.307467   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:23.307379   22350 retry.go:31] will retry after 3.062699154s: waiting for machine to come up
	I0421 18:40:26.373602   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:26.374091   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:26.374119   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:26.374026   22350 retry.go:31] will retry after 2.866180298s: waiting for machine to come up
	I0421 18:40:29.243335   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:29.243653   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find current IP address of domain ha-113226 in network mk-ha-113226
	I0421 18:40:29.243673   22327 main.go:141] libmachine: (ha-113226) DBG | I0421 18:40:29.243627   22350 retry.go:31] will retry after 4.19991653s: waiting for machine to come up
	I0421 18:40:33.445893   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.446318   22327 main.go:141] libmachine: (ha-113226) Found IP for machine: 192.168.39.60
	I0421 18:40:33.446339   22327 main.go:141] libmachine: (ha-113226) Reserving static IP address...
	I0421 18:40:33.446352   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has current primary IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.446743   22327 main.go:141] libmachine: (ha-113226) DBG | unable to find host DHCP lease matching {name: "ha-113226", mac: "52:54:00:3d:6a:b5", ip: "192.168.39.60"} in network mk-ha-113226
	I0421 18:40:33.518856   22327 main.go:141] libmachine: (ha-113226) Reserved static IP address: 192.168.39.60
	I0421 18:40:33.518886   22327 main.go:141] libmachine: (ha-113226) Waiting for SSH to be available...
	I0421 18:40:33.518896   22327 main.go:141] libmachine: (ha-113226) DBG | Getting to WaitForSSH function...
	I0421 18:40:33.521267   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.521649   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.521673   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.521807   22327 main.go:141] libmachine: (ha-113226) DBG | Using SSH client type: external
	I0421 18:40:33.521838   22327 main.go:141] libmachine: (ha-113226) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa (-rw-------)
	I0421 18:40:33.521881   22327 main.go:141] libmachine: (ha-113226) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:40:33.521895   22327 main.go:141] libmachine: (ha-113226) DBG | About to run SSH command:
	I0421 18:40:33.521910   22327 main.go:141] libmachine: (ha-113226) DBG | exit 0
	I0421 18:40:33.646269   22327 main.go:141] libmachine: (ha-113226) DBG | SSH cmd err, output: <nil>: 
	I0421 18:40:33.646507   22327 main.go:141] libmachine: (ha-113226) KVM machine creation complete!
	I0421 18:40:33.646891   22327 main.go:141] libmachine: (ha-113226) Calling .GetConfigRaw
	I0421 18:40:33.647436   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:33.647636   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:33.647815   22327 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:40:33.647830   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:40:33.649157   22327 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:40:33.649170   22327 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:40:33.649188   22327 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:40:33.649194   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.651550   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.651994   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.652032   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.652100   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.652297   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.652451   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.652614   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.652815   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.653005   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.653017   22327 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:40:33.757953   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:40:33.757979   22327 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:40:33.757990   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.760834   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.761177   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.761209   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.761318   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.761507   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.761747   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.761901   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.762083   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.762248   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.762260   22327 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:40:33.867828   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:40:33.867919   22327 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:40:33.867931   22327 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:40:33.867938   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:33.868182   22327 buildroot.go:166] provisioning hostname "ha-113226"
	I0421 18:40:33.868203   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:33.868377   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.871038   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.871474   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.871506   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.871641   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.871883   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.872039   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.872176   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.872396   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.872590   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.872606   22327 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226 && echo "ha-113226" | sudo tee /etc/hostname
	I0421 18:40:33.995180   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226
	
	I0421 18:40:33.995211   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:33.998164   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.998531   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:33.998558   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:33.998803   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:33.999019   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.999196   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:33.999322   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:33.999479   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:33.999655   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:33.999670   22327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:40:34.112364   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:40:34.112397   22327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:40:34.112421   22327 buildroot.go:174] setting up certificates
	I0421 18:40:34.112433   22327 provision.go:84] configureAuth start
	I0421 18:40:34.112444   22327 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:40:34.112719   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:34.115630   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.116089   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.116116   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.116265   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.118481   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.118840   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.118888   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.118977   22327 provision.go:143] copyHostCerts
	I0421 18:40:34.119021   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:40:34.119052   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:40:34.119061   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:40:34.119135   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:40:34.119256   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:40:34.119283   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:40:34.119293   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:40:34.119330   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:40:34.119438   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:40:34.119473   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:40:34.119482   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:40:34.119517   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:40:34.119595   22327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226 san=[127.0.0.1 192.168.39.60 ha-113226 localhost minikube]
	I0421 18:40:34.256665   22327 provision.go:177] copyRemoteCerts
	I0421 18:40:34.256715   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:40:34.256734   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.259197   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.259480   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.259508   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.259721   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.259926   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.260066   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.260208   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:34.346033   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:40:34.346120   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:40:34.373930   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:40:34.374008   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0421 18:40:34.401211   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:40:34.401283   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 18:40:34.427360   22327 provision.go:87] duration metric: took 314.915519ms to configureAuth
	I0421 18:40:34.427382   22327 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:40:34.427550   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:40:34.427619   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.430611   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.430952   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.430975   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.431182   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.431378   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.431566   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.431715   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.431887   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:34.432083   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:34.432112   22327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:40:34.709099   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:40:34.709122   22327 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:40:34.709129   22327 main.go:141] libmachine: (ha-113226) Calling .GetURL
	I0421 18:40:34.710361   22327 main.go:141] libmachine: (ha-113226) DBG | Using libvirt version 6000000
	I0421 18:40:34.712785   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.713172   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.713201   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.713361   22327 main.go:141] libmachine: Docker is up and running!
	I0421 18:40:34.713377   22327 main.go:141] libmachine: Reticulating splines...
	I0421 18:40:34.713385   22327 client.go:171] duration metric: took 23.27160744s to LocalClient.Create
	I0421 18:40:34.713412   22327 start.go:167] duration metric: took 23.271674332s to libmachine.API.Create "ha-113226"
	I0421 18:40:34.713424   22327 start.go:293] postStartSetup for "ha-113226" (driver="kvm2")
	I0421 18:40:34.713453   22327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:40:34.713474   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.713712   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:40:34.713735   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.715743   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.716071   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.716099   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.716181   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.716359   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.716509   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.716666   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:34.802479   22327 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:40:34.807173   22327 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:40:34.807199   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:40:34.807274   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:40:34.807366   22327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:40:34.807385   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:40:34.807493   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:40:34.818781   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:40:34.846359   22327 start.go:296] duration metric: took 132.921107ms for postStartSetup
	I0421 18:40:34.846414   22327 main.go:141] libmachine: (ha-113226) Calling .GetConfigRaw
	I0421 18:40:34.847069   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:34.849880   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.850251   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.850292   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.850485   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:40:34.850648   22327 start.go:128] duration metric: took 23.426630557s to createHost
	I0421 18:40:34.850667   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.852770   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.853063   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.853087   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.853230   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.853402   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.853574   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.853687   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.853846   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:40:34.854001   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:40:34.854018   22327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:40:34.959823   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713724834.928312409
	
	I0421 18:40:34.959848   22327 fix.go:216] guest clock: 1713724834.928312409
	I0421 18:40:34.959857   22327 fix.go:229] Guest: 2024-04-21 18:40:34.928312409 +0000 UTC Remote: 2024-04-21 18:40:34.850658084 +0000 UTC m=+23.547812524 (delta=77.654325ms)
	I0421 18:40:34.959877   22327 fix.go:200] guest clock delta is within tolerance: 77.654325ms
	I0421 18:40:34.959882   22327 start.go:83] releasing machines lock for "ha-113226", held for 23.53594762s
	I0421 18:40:34.959901   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.960163   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:34.962613   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.963001   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.963035   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.963216   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.963693   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.963860   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:40:34.963948   22327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:40:34.963984   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.964040   22327 ssh_runner.go:195] Run: cat /version.json
	I0421 18:40:34.964075   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:40:34.966434   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.966751   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.966777   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.966796   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.966910   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.967085   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.967228   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.968009   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:34.968663   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:34.968692   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:34.968900   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:40:34.969075   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:40:34.969208   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:40:34.969383   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:40:35.047854   22327 ssh_runner.go:195] Run: systemctl --version
	I0421 18:40:35.070662   22327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:40:35.237644   22327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:40:35.244231   22327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:40:35.244315   22327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:40:35.263802   22327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:40:35.263822   22327 start.go:494] detecting cgroup driver to use...
	I0421 18:40:35.263887   22327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:40:35.281936   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:40:35.296300   22327 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:40:35.296369   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:40:35.310821   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:40:35.325114   22327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:40:35.441304   22327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:40:35.603777   22327 docker.go:233] disabling docker service ...
	I0421 18:40:35.603839   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:40:35.620496   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:40:35.635558   22327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:40:35.755775   22327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:40:35.879362   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:40:35.896068   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:40:35.917780   22327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:40:35.917833   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.930533   22327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:40:35.930592   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.948481   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.960461   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.972842   22327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:40:35.985323   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:35.997730   22327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:40:36.017090   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
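	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the "pod" cgroup, and open unprivileged ports via default_sysctls. A quick way to check the result by hand (a sketch based only on the commands shown above, not part of this run's output):

	  # Inspect the keys the sed edits above are expected to set
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # expected (per the commands above):
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",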
	I0421 18:40:36.029406   22327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:40:36.040683   22327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:40:36.040750   22327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:40:36.056550   22327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:40:36.067473   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:40:36.191966   22327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:40:36.340108   22327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:40:36.340175   22327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:40:36.345207   22327 start.go:562] Will wait 60s for crictl version
	I0421 18:40:36.345251   22327 ssh_runner.go:195] Run: which crictl
	I0421 18:40:36.349655   22327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:40:36.392904   22327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
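	The crictl probe above is how minikube confirms the CRI-O runtime is up. The same checks can be reproduced from the host against this profile (a sketch assuming the ha-113226 profile from this run is still running):

	  # Re-run the runtime version probes shown above via minikube ssh
	  minikube -p ha-113226 ssh "sudo /usr/bin/crictl version"
	  minikube -p ha-113226 ssh "crio --version"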
	I0421 18:40:36.392988   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:40:36.426280   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:40:36.459781   22327 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:40:36.461153   22327 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:40:36.463537   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:36.463894   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:40:36.463918   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:40:36.464086   22327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:40:36.468766   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:40:36.483564   22327 kubeadm.go:877] updating cluster {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:40:36.483668   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:40:36.483725   22327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:40:36.519121   22327 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 18:40:36.519178   22327 ssh_runner.go:195] Run: which lz4
	I0421 18:40:36.523385   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0421 18:40:36.523488   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 18:40:36.527983   22327 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 18:40:36.528012   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 18:40:38.164489   22327 crio.go:462] duration metric: took 1.641039281s to copy over tarball
	I0421 18:40:38.164556   22327 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 18:40:40.683506   22327 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.518924049s)
	I0421 18:40:40.683530   22327 crio.go:469] duration metric: took 2.519017711s to extract the tarball
	I0421 18:40:40.683537   22327 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 18:40:40.723140   22327 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:40:40.770654   22327 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:40:40.770677   22327 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:40:40.770685   22327 kubeadm.go:928] updating node { 192.168.39.60 8443 v1.30.0 crio true true} ...
	I0421 18:40:40.770798   22327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
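	The kubelet unit drop-in above is written to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, alongside /lib/systemd/system/kubelet.service. To see exactly what systemd ends up with (a sketch assuming the ha-113226 profile from this run):

	  # Inspect the kubelet unit and drop-in that the scp steps below install
	  minikube -p ha-113226 ssh "sudo systemctl cat kubelet"
	  minikube -p ha-113226 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"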
	I0421 18:40:40.770868   22327 ssh_runner.go:195] Run: crio config
	I0421 18:40:40.816746   22327 cni.go:84] Creating CNI manager for ""
	I0421 18:40:40.816768   22327 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 18:40:40.816781   22327 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:40:40.816815   22327 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-113226 NodeName:ha-113226 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:40:40.816983   22327 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-113226"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
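	The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to /var/tmp/minikube/kubeadm.yaml just before init. If you want to validate it without changing any node state, kubeadm supports a dry run (a sketch, not something this run performs):

	  # Validate the generated config without creating anything on the node
	  sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run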
	I0421 18:40:40.817009   22327 kube-vip.go:111] generating kube-vip config ...
	I0421 18:40:40.817063   22327 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:40:40.837871   22327 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:40:40.837989   22327 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
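	The kube-vip manifest above is installed as a static pod at /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so the kubelet runs it alongside the control-plane pods and it advertises the HA VIP 192.168.39.254 on eth0 once it holds the plndr-cp-lock lease. To confirm on the node (a sketch assuming the ha-113226 profile from this run):

	  # Check the installed manifest and whether this node currently holds the VIP
	  minikube -p ha-113226 ssh "sudo cat /etc/kubernetes/manifests/kube-vip.yaml"
	  minikube -p ha-113226 ssh "ip addr show eth0 | grep 192.168.39.254"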
	I0421 18:40:40.838043   22327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:40:40.849398   22327 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:40:40.849449   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0421 18:40:40.860358   22327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0421 18:40:40.879454   22327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:40:40.898164   22327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0421 18:40:40.916645   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0421 18:40:40.935772   22327 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:40:40.940419   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:40:40.954779   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:40:41.095630   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:40:41.115505   22327 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.60
	I0421 18:40:41.115530   22327 certs.go:194] generating shared ca certs ...
	I0421 18:40:41.115553   22327 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.115730   22327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:40:41.115791   22327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:40:41.115806   22327 certs.go:256] generating profile certs ...
	I0421 18:40:41.115871   22327 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:40:41.115890   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt with IP's: []
	I0421 18:40:41.337876   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt ...
	I0421 18:40:41.337910   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt: {Name:mk07cf03864a7605e553f54f506054e82d530dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.338086   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key ...
	I0421 18:40:41.338102   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key: {Name:mk51046988dfae73dafd5e2bb52db757d2195cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.338190   22327 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d
	I0421 18:40:41.338205   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.254]
	I0421 18:40:41.589025   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d ...
	I0421 18:40:41.589052   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d: {Name:mk407e3447bdc028cf5399a781093ec5b8197618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.589201   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d ...
	I0421 18:40:41.589213   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d: {Name:mk1ad33bf18c891f5bde4dd54410f94c60feaea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.589280   22327 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7ae1430d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:40:41.589353   22327 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7ae1430d -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:40:41.589407   22327 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:40:41.589421   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt with IP's: []
	I0421 18:40:41.688207   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt ...
	I0421 18:40:41.688237   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt: {Name:mk383a6d0d511a7d91ac43bbafb15d715b1c50e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.688398   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key ...
	I0421 18:40:41.688411   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key: {Name:mkcbdf233bd19e5502b42d9eb3ef410542c029bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:40:41.688496   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:40:41.688513   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:40:41.688523   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:40:41.688536   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:40:41.688546   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:40:41.688559   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:40:41.688572   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:40:41.688584   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:40:41.688629   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:40:41.688670   22327 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:40:41.688679   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:40:41.688703   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:40:41.688730   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:40:41.688754   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:40:41.688793   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:40:41.688817   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:40:41.688830   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:41.688847   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:40:41.689399   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:40:41.727238   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:40:41.758482   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:40:41.788689   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:40:41.820960   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 18:40:41.849361   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:40:41.880977   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:40:41.920407   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:40:41.958263   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:40:41.985860   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:40:42.015768   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:40:42.042644   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:40:42.061470   22327 ssh_runner.go:195] Run: openssl version
	I0421 18:40:42.068277   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:40:42.082172   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:40:42.087312   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:40:42.087355   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:40:42.093988   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:40:42.108162   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:40:42.122565   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:42.128007   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:42.128050   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:40:42.134713   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:40:42.149732   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:40:42.162537   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:40:42.167772   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:40:42.167840   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:40:42.174259   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
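	The /etc/ssl/certs symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: minikube computes each hash with the same openssl x509 -hash call shown in the log and links <hash>.0 to the installed PEM. Reproducing the mapping by hand (hash values as they appear in this run):

	  # Subject hashes behind the symlinks created above
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem      # 3ec20f2e
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # b5213941
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem       # 51391683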
	I0421 18:40:42.186991   22327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:40:42.192079   22327 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:40:42.192129   22327 kubeadm.go:391] StartCluster: {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:40:42.192226   22327 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 18:40:42.192291   22327 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 18:40:42.243493   22327 cri.go:89] found id: ""
	I0421 18:40:42.243561   22327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 18:40:42.256888   22327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 18:40:42.269446   22327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 18:40:42.282243   22327 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 18:40:42.282270   22327 kubeadm.go:156] found existing configuration files:
	
	I0421 18:40:42.282315   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 18:40:42.293790   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 18:40:42.293859   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 18:40:42.305322   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 18:40:42.316693   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 18:40:42.316759   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 18:40:42.330988   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 18:40:42.347215   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 18:40:42.347282   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 18:40:42.358764   22327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 18:40:42.369364   22327 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 18:40:42.369411   22327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 18:40:42.379858   22327 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 18:40:42.487728   22327 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 18:40:42.487787   22327 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 18:40:42.622420   22327 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 18:40:42.622579   22327 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 18:40:42.622724   22327 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 18:40:42.882186   22327 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 18:40:43.082365   22327 out.go:204]   - Generating certificates and keys ...
	I0421 18:40:43.082504   22327 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 18:40:43.082582   22327 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 18:40:43.082659   22327 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 18:40:43.169123   22327 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 18:40:43.301953   22327 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 18:40:43.522237   22327 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 18:40:43.699612   22327 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 18:40:43.699764   22327 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-113226 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0421 18:40:43.835634   22327 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 18:40:43.835906   22327 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-113226 localhost] and IPs [192.168.39.60 127.0.0.1 ::1]
	I0421 18:40:44.083423   22327 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 18:40:44.550387   22327 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 18:40:44.617550   22327 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 18:40:44.618359   22327 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 18:40:44.849445   22327 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 18:40:44.989893   22327 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 18:40:45.168919   22327 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 18:40:45.273209   22327 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 18:40:45.340972   22327 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 18:40:45.341671   22327 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 18:40:45.345091   22327 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 18:40:45.347040   22327 out.go:204]   - Booting up control plane ...
	I0421 18:40:45.347156   22327 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 18:40:45.347244   22327 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 18:40:45.348109   22327 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 18:40:45.369732   22327 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 18:40:45.370680   22327 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 18:40:45.370728   22327 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 18:40:45.503998   22327 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 18:40:45.504097   22327 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 18:40:46.004989   22327 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.400498ms
	I0421 18:40:46.005114   22327 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 18:40:55.130632   22327 kubeadm.go:309] [api-check] The API server is healthy after 9.129088309s
	I0421 18:40:55.142896   22327 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 18:40:55.157751   22327 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 18:40:55.193655   22327 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 18:40:55.193916   22327 kubeadm.go:309] [mark-control-plane] Marking the node ha-113226 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 18:40:55.205791   22327 kubeadm.go:309] [bootstrap-token] Using token: or0ghb.3tvn35rv8gqgy7dn
	I0421 18:40:55.207314   22327 out.go:204]   - Configuring RBAC rules ...
	I0421 18:40:55.207419   22327 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 18:40:55.218747   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 18:40:55.226602   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 18:40:55.232186   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 18:40:55.236480   22327 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 18:40:55.240116   22327 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 18:40:55.537481   22327 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 18:40:55.979802   22327 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 18:40:56.537014   22327 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 18:40:56.538339   22327 kubeadm.go:309] 
	I0421 18:40:56.538393   22327 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 18:40:56.538398   22327 kubeadm.go:309] 
	I0421 18:40:56.538467   22327 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 18:40:56.538474   22327 kubeadm.go:309] 
	I0421 18:40:56.538522   22327 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 18:40:56.538593   22327 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 18:40:56.538671   22327 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 18:40:56.538714   22327 kubeadm.go:309] 
	I0421 18:40:56.538796   22327 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 18:40:56.538806   22327 kubeadm.go:309] 
	I0421 18:40:56.538872   22327 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 18:40:56.538881   22327 kubeadm.go:309] 
	I0421 18:40:56.538945   22327 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 18:40:56.539033   22327 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 18:40:56.539115   22327 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 18:40:56.539125   22327 kubeadm.go:309] 
	I0421 18:40:56.539224   22327 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 18:40:56.539310   22327 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 18:40:56.539322   22327 kubeadm.go:309] 
	I0421 18:40:56.539438   22327 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token or0ghb.3tvn35rv8gqgy7dn \
	I0421 18:40:56.539552   22327 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 18:40:56.539573   22327 kubeadm.go:309] 	--control-plane 
	I0421 18:40:56.539577   22327 kubeadm.go:309] 
	I0421 18:40:56.539693   22327 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 18:40:56.539710   22327 kubeadm.go:309] 
	I0421 18:40:56.539822   22327 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token or0ghb.3tvn35rv8gqgy7dn \
	I0421 18:40:56.539984   22327 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 18:40:56.540606   22327 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
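
The block above is the full kubeadm init transcript for the primary control-plane node, started by the ssh_runner command at its top. As an illustration only (run locally rather than over SSH, with the --ignore-preflight-errors list abbreviated), a minimal Go sketch of issuing that same command:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same shape as the ssh_runner invocation in the log; PATH, the config
        // path and the preflight checks to ignore are copied from that line
        // (the ignore list is shortened here for readability).
        cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" ` +
            `kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
            `--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem`

        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("kubeadm init failed:", err)
        }
    }
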
	I0421 18:40:56.540748   22327 cni.go:84] Creating CNI manager for ""
	I0421 18:40:56.540766   22327 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 18:40:56.542657   22327 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 18:40:56.544041   22327 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 18:40:56.551639   22327 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 18:40:56.551659   22327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 18:40:56.573515   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
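
The CNI step above picks a network plugin because none was requested explicitly: with more than one node planned for the profile, minikube recommends kindnet, writes the generated manifest to /var/tmp/minikube/cni.yaml, and applies it with the bundled kubectl. A rough sketch of that decision plus the apply call (chooseCNI, applyManifest and the single-node "bridge" fallback are illustrative names and behaviour, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // chooseCNI mirrors the decision visible in the log: an explicitly
    // requested CNI wins; otherwise a multi-node profile gets kindnet.
    func chooseCNI(requested string, multiNode bool) string {
        if requested != "" {
            return requested
        }
        if multiNode {
            return "kindnet"
        }
        return "bridge" // assumed single-node fallback for this sketch
    }

    // applyManifest shells out to the bundled kubectl, as the log shows.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
        out, err := exec.Command("sudo", kubectl,
            "apply", "--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }

    func main() {
        cni := chooseCNI("", true) // empty request, multi-node profile -> kindnet
        fmt.Println("selected CNI:", cni)
        _ = applyManifest("/var/lib/minikube/binaries/v1.30.0/kubectl",
            "/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml")
    }
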
	I0421 18:40:56.929673   22327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 18:40:56.929752   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:56.929796   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-113226 minikube.k8s.io/updated_at=2024_04_21T18_40_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-113226 minikube.k8s.io/primary=true
	I0421 18:40:57.114007   22327 ops.go:34] apiserver oom_adj: -16
	I0421 18:40:57.114073   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:57.615055   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:58.114425   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:58.615043   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:59.114992   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:40:59.614237   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:00.114769   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:00.614210   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:01.115103   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:01.615035   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:02.115062   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:02.614249   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:03.114731   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:03.614975   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:04.115073   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:04.615064   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:05.114171   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:05.614177   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:06.114959   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:41:06.342460   22327 kubeadm.go:1107] duration metric: took 9.41276071s to wait for elevateKubeSystemPrivileges
	W0421 18:41:06.342505   22327 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 18:41:06.342515   22327 kubeadm.go:393] duration metric: took 24.150389266s to StartCluster
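
The run of identical `kubectl get sa default` calls between 18:40:57 and 18:41:06 is a poll loop: minikube retries roughly every 500ms until the `default` service account exists, which is what the 9.41s elevateKubeSystemPrivileges metric measures. A minimal sketch of that kind of wait loop (waitForDefaultSA is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the ~500ms cadence visible in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        start := time.Now()
        if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.0/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("default SA ready after", time.Since(start))
    }
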
	I0421 18:41:06.342535   22327 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:06.342624   22327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:41:06.343620   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:06.343906   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 18:41:06.343925   22327 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 18:41:06.343997   22327 addons.go:69] Setting storage-provisioner=true in profile "ha-113226"
	I0421 18:41:06.343897   22327 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:41:06.344018   22327 start.go:240] waiting for startup goroutines ...
	I0421 18:41:06.344027   22327 addons.go:234] Setting addon storage-provisioner=true in "ha-113226"
	I0421 18:41:06.344035   22327 addons.go:69] Setting default-storageclass=true in profile "ha-113226"
	I0421 18:41:06.344055   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:41:06.344066   22327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-113226"
	I0421 18:41:06.344545   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:06.345126   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.345187   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.345307   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.345351   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.360730   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0421 18:41:06.361187   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.361613   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.361626   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.361917   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.362476   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.362515   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.365391   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0421 18:41:06.365768   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.366271   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.366298   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.366662   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.366853   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:06.369349   22327 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:41:06.369682   22327 kapi.go:59] client config for ha-113226: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 18:41:06.370211   22327 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 18:41:06.370432   22327 addons.go:234] Setting addon default-storageclass=true in "ha-113226"
	I0421 18:41:06.370476   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:41:06.370851   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.370914   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.377839   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0421 18:41:06.378278   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.378822   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.378856   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.379170   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.379330   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:06.380860   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:41:06.382697   22327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:41:06.384181   22327 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:41:06.384200   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 18:41:06.384218   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:41:06.385525   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0421 18:41:06.385863   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.386397   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.386415   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.386722   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.386904   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.387250   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:41:06.387269   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.387427   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:06.387452   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:06.387512   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:41:06.387634   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:41:06.387780   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:41:06.387898   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:41:06.407408   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0421 18:41:06.407764   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:06.408281   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:06.408304   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:06.408597   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:06.408839   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:06.410216   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:41:06.410444   22327 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 18:41:06.410461   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 18:41:06.410478   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:41:06.412663   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.413119   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:41:06.413142   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:06.413276   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:41:06.413423   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:41:06.413545   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:41:06.413723   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:41:06.526208   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 18:41:06.557558   22327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:41:06.568257   22327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 18:41:07.241782   22327 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
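
The host record reported above is injected by the ssh_runner pipeline at 18:41:06.526208: dump the coredns ConfigMap, use sed to splice a `hosts { ... fallthrough }` block in front of the `forward . /etc/resolv.conf` directive, then feed the result back through `kubectl replace -f -`. A compact sketch that rebuilds the same pipeline string (the second sed expression that adds `log` is omitted; paths and the 192.168.39.1 gateway address are copied from the log):

    package main

    import "fmt"

    func main() {
        kubectl := "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"

        // The hosts block CoreDNS ends up with: host.minikube.internal maps to
        // the host-side gateway of the VM network (192.168.39.1 here).
        hostsBlock := `        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }`

        // Same shape as the logged command: dump the coredns ConfigMap, splice
        // the hosts block in ahead of the forward directive with sed, and push
        // the result back with `kubectl replace -f -` (minikube runs this via
        // `bash -c` over SSH).
        pipeline := fmt.Sprintf(
            `%s -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \%s' | %s replace -f -`,
            kubectl, hostsBlock, kubectl)

        fmt.Println(pipeline)
    }
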
	I0421 18:41:07.399295   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399317   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399375   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399392   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399587   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.399599   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.399619   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399631   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399732   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.399747   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.399733   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.399758   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.399765   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.399900   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.399929   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.399936   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.400041   22327 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0421 18:41:07.400048   22327 round_trippers.go:469] Request Headers:
	I0421 18:41:07.400058   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:41:07.400064   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:41:07.400111   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.400123   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.400133   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.409982   22327 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 18:41:07.410560   22327 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0421 18:41:07.410575   22327 round_trippers.go:469] Request Headers:
	I0421 18:41:07.410583   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:41:07.410588   22327 round_trippers.go:473]     Content-Type: application/json
	I0421 18:41:07.410591   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:41:07.417063   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:41:07.417250   22327 main.go:141] libmachine: Making call to close driver server
	I0421 18:41:07.417269   22327 main.go:141] libmachine: (ha-113226) Calling .Close
	I0421 18:41:07.417553   22327 main.go:141] libmachine: (ha-113226) DBG | Closing plugin on server side
	I0421 18:41:07.417642   22327 main.go:141] libmachine: Successfully made call to close driver server
	I0421 18:41:07.417659   22327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 18:41:07.419596   22327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 18:41:07.420983   22327 addons.go:505] duration metric: took 1.077061038s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0421 18:41:07.421022   22327 start.go:245] waiting for cluster config update ...
	I0421 18:41:07.421037   22327 start.go:254] writing updated cluster config ...
	I0421 18:41:07.422926   22327 out.go:177] 
	I0421 18:41:07.424487   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:07.424586   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:41:07.426306   22327 out.go:177] * Starting "ha-113226-m02" control-plane node in "ha-113226" cluster
	I0421 18:41:07.427509   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:41:07.427536   22327 cache.go:56] Caching tarball of preloaded images
	I0421 18:41:07.427641   22327 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:41:07.427655   22327 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:41:07.427754   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:41:07.427966   22327 start.go:360] acquireMachinesLock for ha-113226-m02: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:41:07.428021   22327 start.go:364] duration metric: took 29µs to acquireMachinesLock for "ha-113226-m02"
	I0421 18:41:07.428046   22327 start.go:93] Provisioning new machine with config: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
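
The provisioning config dumped above carries one entry per node. As a reading aid only, an illustrative Go struct (not minikube's real config types) holding the per-node fields visible in that dump; note m02 has an empty IP because its VM is only being created now:

    package main

    import "fmt"

    // Node captures a handful of the per-node fields visible in the config
    // dump above; this is an illustrative type, not minikube's config.Node.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ContainerRuntime  string
        ControlPlane      bool
        Worker            bool
    }

    func main() {
        // The two entries from the Nodes slice in the dump: the primary
        // control plane (already provisioned, so it has an IP) and m02.
        nodes := []Node{
            {Name: "", IP: "192.168.39.60", Port: 8443, KubernetesVersion: "v1.30.0",
                ContainerRuntime: "crio", ControlPlane: true, Worker: true},
            {Name: "m02", IP: "", Port: 8443, KubernetesVersion: "v1.30.0",
                ContainerRuntime: "crio", ControlPlane: true, Worker: true},
        }
        for _, n := range nodes {
            fmt.Printf("%+v\n", n)
        }
    }
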
	I0421 18:41:07.428143   22327 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0421 18:41:07.429960   22327 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 18:41:07.430052   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:07.430093   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:07.444971   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0421 18:41:07.445376   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:07.445783   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:07.445804   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:07.446115   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:07.446274   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:07.446405   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:07.446638   22327 start.go:159] libmachine.API.Create for "ha-113226" (driver="kvm2")
	I0421 18:41:07.446670   22327 client.go:168] LocalClient.Create starting
	I0421 18:41:07.446706   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:41:07.446745   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:41:07.446772   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:41:07.446840   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:41:07.446864   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:41:07.446881   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:41:07.446907   22327 main.go:141] libmachine: Running pre-create checks...
	I0421 18:41:07.446918   22327 main.go:141] libmachine: (ha-113226-m02) Calling .PreCreateCheck
	I0421 18:41:07.447106   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetConfigRaw
	I0421 18:41:07.447479   22327 main.go:141] libmachine: Creating machine...
	I0421 18:41:07.447500   22327 main.go:141] libmachine: (ha-113226-m02) Calling .Create
	I0421 18:41:07.447620   22327 main.go:141] libmachine: (ha-113226-m02) Creating KVM machine...
	I0421 18:41:07.449039   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found existing default KVM network
	I0421 18:41:07.449168   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found existing private KVM network mk-ha-113226
	I0421 18:41:07.449344   22327 main.go:141] libmachine: (ha-113226-m02) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02 ...
	I0421 18:41:07.449372   22327 main.go:141] libmachine: (ha-113226-m02) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:41:07.449388   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:07.449312   22722 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:41:07.449488   22327 main.go:141] libmachine: (ha-113226-m02) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:41:07.677469   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:07.677361   22722 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa...
	I0421 18:41:08.031907   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:08.031742   22722 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/ha-113226-m02.rawdisk...
	I0421 18:41:08.031954   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Writing magic tar header
	I0421 18:41:08.031981   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Writing SSH key tar header
	I0421 18:41:08.032043   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:08.031970   22722 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02 ...
	I0421 18:41:08.032190   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02
	I0421 18:41:08.032428   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:41:08.032455   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:41:08.032470   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02 (perms=drwx------)
	I0421 18:41:08.032484   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:41:08.032498   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:41:08.032507   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:41:08.032521   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Checking permissions on dir: /home
	I0421 18:41:08.032536   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Skipping /home - not owner
	I0421 18:41:08.032547   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:41:08.032565   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:41:08.032579   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:41:08.032598   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:41:08.032612   22327 main.go:141] libmachine: (ha-113226-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:41:08.032626   22327 main.go:141] libmachine: (ha-113226-m02) Creating domain...
	I0421 18:41:08.033576   22327 main.go:141] libmachine: (ha-113226-m02) define libvirt domain using xml: 
	I0421 18:41:08.033594   22327 main.go:141] libmachine: (ha-113226-m02) <domain type='kvm'>
	I0421 18:41:08.033601   22327 main.go:141] libmachine: (ha-113226-m02)   <name>ha-113226-m02</name>
	I0421 18:41:08.033607   22327 main.go:141] libmachine: (ha-113226-m02)   <memory unit='MiB'>2200</memory>
	I0421 18:41:08.033612   22327 main.go:141] libmachine: (ha-113226-m02)   <vcpu>2</vcpu>
	I0421 18:41:08.033617   22327 main.go:141] libmachine: (ha-113226-m02)   <features>
	I0421 18:41:08.033622   22327 main.go:141] libmachine: (ha-113226-m02)     <acpi/>
	I0421 18:41:08.033627   22327 main.go:141] libmachine: (ha-113226-m02)     <apic/>
	I0421 18:41:08.033632   22327 main.go:141] libmachine: (ha-113226-m02)     <pae/>
	I0421 18:41:08.033638   22327 main.go:141] libmachine: (ha-113226-m02)     
	I0421 18:41:08.033642   22327 main.go:141] libmachine: (ha-113226-m02)   </features>
	I0421 18:41:08.033647   22327 main.go:141] libmachine: (ha-113226-m02)   <cpu mode='host-passthrough'>
	I0421 18:41:08.033652   22327 main.go:141] libmachine: (ha-113226-m02)   
	I0421 18:41:08.033663   22327 main.go:141] libmachine: (ha-113226-m02)   </cpu>
	I0421 18:41:08.033669   22327 main.go:141] libmachine: (ha-113226-m02)   <os>
	I0421 18:41:08.033672   22327 main.go:141] libmachine: (ha-113226-m02)     <type>hvm</type>
	I0421 18:41:08.033678   22327 main.go:141] libmachine: (ha-113226-m02)     <boot dev='cdrom'/>
	I0421 18:41:08.033683   22327 main.go:141] libmachine: (ha-113226-m02)     <boot dev='hd'/>
	I0421 18:41:08.033689   22327 main.go:141] libmachine: (ha-113226-m02)     <bootmenu enable='no'/>
	I0421 18:41:08.033694   22327 main.go:141] libmachine: (ha-113226-m02)   </os>
	I0421 18:41:08.033699   22327 main.go:141] libmachine: (ha-113226-m02)   <devices>
	I0421 18:41:08.033707   22327 main.go:141] libmachine: (ha-113226-m02)     <disk type='file' device='cdrom'>
	I0421 18:41:08.033721   22327 main.go:141] libmachine: (ha-113226-m02)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/boot2docker.iso'/>
	I0421 18:41:08.033733   22327 main.go:141] libmachine: (ha-113226-m02)       <target dev='hdc' bus='scsi'/>
	I0421 18:41:08.033754   22327 main.go:141] libmachine: (ha-113226-m02)       <readonly/>
	I0421 18:41:08.033772   22327 main.go:141] libmachine: (ha-113226-m02)     </disk>
	I0421 18:41:08.033783   22327 main.go:141] libmachine: (ha-113226-m02)     <disk type='file' device='disk'>
	I0421 18:41:08.033793   22327 main.go:141] libmachine: (ha-113226-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:41:08.033807   22327 main.go:141] libmachine: (ha-113226-m02)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/ha-113226-m02.rawdisk'/>
	I0421 18:41:08.033814   22327 main.go:141] libmachine: (ha-113226-m02)       <target dev='hda' bus='virtio'/>
	I0421 18:41:08.033820   22327 main.go:141] libmachine: (ha-113226-m02)     </disk>
	I0421 18:41:08.033828   22327 main.go:141] libmachine: (ha-113226-m02)     <interface type='network'>
	I0421 18:41:08.033833   22327 main.go:141] libmachine: (ha-113226-m02)       <source network='mk-ha-113226'/>
	I0421 18:41:08.033838   22327 main.go:141] libmachine: (ha-113226-m02)       <model type='virtio'/>
	I0421 18:41:08.033846   22327 main.go:141] libmachine: (ha-113226-m02)     </interface>
	I0421 18:41:08.033850   22327 main.go:141] libmachine: (ha-113226-m02)     <interface type='network'>
	I0421 18:41:08.033879   22327 main.go:141] libmachine: (ha-113226-m02)       <source network='default'/>
	I0421 18:41:08.033903   22327 main.go:141] libmachine: (ha-113226-m02)       <model type='virtio'/>
	I0421 18:41:08.033917   22327 main.go:141] libmachine: (ha-113226-m02)     </interface>
	I0421 18:41:08.033929   22327 main.go:141] libmachine: (ha-113226-m02)     <serial type='pty'>
	I0421 18:41:08.033942   22327 main.go:141] libmachine: (ha-113226-m02)       <target port='0'/>
	I0421 18:41:08.033953   22327 main.go:141] libmachine: (ha-113226-m02)     </serial>
	I0421 18:41:08.033962   22327 main.go:141] libmachine: (ha-113226-m02)     <console type='pty'>
	I0421 18:41:08.033973   22327 main.go:141] libmachine: (ha-113226-m02)       <target type='serial' port='0'/>
	I0421 18:41:08.033982   22327 main.go:141] libmachine: (ha-113226-m02)     </console>
	I0421 18:41:08.033989   22327 main.go:141] libmachine: (ha-113226-m02)     <rng model='virtio'>
	I0421 18:41:08.034004   22327 main.go:141] libmachine: (ha-113226-m02)       <backend model='random'>/dev/random</backend>
	I0421 18:41:08.034017   22327 main.go:141] libmachine: (ha-113226-m02)     </rng>
	I0421 18:41:08.034024   22327 main.go:141] libmachine: (ha-113226-m02)     
	I0421 18:41:08.034050   22327 main.go:141] libmachine: (ha-113226-m02)     
	I0421 18:41:08.034100   22327 main.go:141] libmachine: (ha-113226-m02)   </devices>
	I0421 18:41:08.034113   22327 main.go:141] libmachine: (ha-113226-m02) </domain>
	I0421 18:41:08.034123   22327 main.go:141] libmachine: (ha-113226-m02) 
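
The domain definition printed line by line above is plain libvirt XML: the boot2docker ISO attached as a cdrom, the raw disk, and two virtio NICs, one on the cluster network mk-ha-113226 and one on libvirt's default network. A trimmed sketch that renders the same shape with text/template (the rendered XML is what the kvm2 driver hands to libvirt's define-domain call; names, sizes and paths are copied from the log):

    package main

    import (
        "os"
        "text/template"
    )

    // domainTmpl is a cut-down version of the domain XML in the log: name,
    // memory, vcpus, the ISO as a cdrom, the raw disk, and two virtio NICs.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
        <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
        <interface type='network'><source network='default'/><model type='virtio'/></interface>
      </devices>
    </domain>`

    type domainParams struct {
        Name, ISO, Disk, Network string
        MemoryMiB, CPUs          int
    }

    func main() {
        p := domainParams{
            Name:      "ha-113226-m02",
            MemoryMiB: 2200,
            CPUs:      2,
            ISO:       "/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/boot2docker.iso",
            Disk:      "/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/ha-113226-m02.rawdisk",
            Network:   "mk-ha-113226",
        }
        _ = template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, p)
    }
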
	I0421 18:41:08.040923   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:71:77:f4 in network default
	I0421 18:41:08.041467   22327 main.go:141] libmachine: (ha-113226-m02) Ensuring networks are active...
	I0421 18:41:08.041487   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:08.042146   22327 main.go:141] libmachine: (ha-113226-m02) Ensuring network default is active
	I0421 18:41:08.042501   22327 main.go:141] libmachine: (ha-113226-m02) Ensuring network mk-ha-113226 is active
	I0421 18:41:08.042871   22327 main.go:141] libmachine: (ha-113226-m02) Getting domain xml...
	I0421 18:41:08.043522   22327 main.go:141] libmachine: (ha-113226-m02) Creating domain...
	I0421 18:41:09.277030   22327 main.go:141] libmachine: (ha-113226-m02) Waiting to get IP...
	I0421 18:41:09.277872   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:09.278407   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:09.278429   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:09.278383   22722 retry.go:31] will retry after 263.544195ms: waiting for machine to come up
	I0421 18:41:09.544042   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:09.544596   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:09.544623   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:09.544561   22722 retry.go:31] will retry after 314.37187ms: waiting for machine to come up
	I0421 18:41:09.859966   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:09.860460   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:09.860483   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:09.860426   22722 retry.go:31] will retry after 403.379124ms: waiting for machine to come up
	I0421 18:41:10.264830   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:10.265239   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:10.265263   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:10.265211   22722 retry.go:31] will retry after 570.842593ms: waiting for machine to come up
	I0421 18:41:10.837904   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:10.838340   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:10.838363   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:10.838287   22722 retry.go:31] will retry after 563.730901ms: waiting for machine to come up
	I0421 18:41:11.403949   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:11.404374   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:11.404411   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:11.404336   22722 retry.go:31] will retry after 624.074886ms: waiting for machine to come up
	I0421 18:41:12.029954   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:12.030595   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:12.030625   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:12.030548   22722 retry.go:31] will retry after 816.379918ms: waiting for machine to come up
	I0421 18:41:12.848209   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:12.848659   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:12.848688   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:12.848617   22722 retry.go:31] will retry after 1.033034557s: waiting for machine to come up
	I0421 18:41:13.883601   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:13.883983   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:13.884018   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:13.883940   22722 retry.go:31] will retry after 1.604433858s: waiting for machine to come up
	I0421 18:41:15.490700   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:15.491113   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:15.491143   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:15.491065   22722 retry.go:31] will retry after 1.927254199s: waiting for machine to come up
	I0421 18:41:17.419508   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:17.419918   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:17.419950   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:17.419902   22722 retry.go:31] will retry after 2.429342073s: waiting for machine to come up
	I0421 18:41:19.850459   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:19.850904   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:19.850930   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:19.850863   22722 retry.go:31] will retry after 2.535315039s: waiting for machine to come up
	I0421 18:41:22.388249   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:22.388723   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:22.388749   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:22.388682   22722 retry.go:31] will retry after 3.428684679s: waiting for machine to come up
	I0421 18:41:25.819051   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:25.819520   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find current IP address of domain ha-113226-m02 in network mk-ha-113226
	I0421 18:41:25.819547   22327 main.go:141] libmachine: (ha-113226-m02) DBG | I0421 18:41:25.819474   22722 retry.go:31] will retry after 4.932403392s: waiting for machine to come up
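
The "waiting for machine to come up" lines show the driver polling for a DHCP lease on the new MAC address, sleeping a little longer (with jitter) after each miss: 263ms, 314ms, 403ms, and so on up to about 4.9s. A small sketch of that retry pattern (lookupLeaseIP is a stub; the real driver reads libvirt's DHCP leases for mk-ha-113226):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP stands in for the driver's DHCP-lease lookup; it returns an
    // error until the hypervisor has handed the new MAC an address.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet for " + mac)
    }

    // waitForIP retries the lookup with a growing, jittered delay, matching the
    // progression of retry intervals visible in the log.
    func waitForIP(mac string, attempts int) (string, error) {
        delay := 250 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow the base delay each round
        }
        return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
    }

    func main() {
        ip, err := waitForIP("52:54:00:4f:2c:56", 5)
        fmt.Println(ip, err)
    }
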
	I0421 18:41:30.755560   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.756031   22327 main.go:141] libmachine: (ha-113226-m02) Found IP for machine: 192.168.39.233
	I0421 18:41:30.756057   22327 main.go:141] libmachine: (ha-113226-m02) Reserving static IP address...
	I0421 18:41:30.756069   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has current primary IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.756397   22327 main.go:141] libmachine: (ha-113226-m02) DBG | unable to find host DHCP lease matching {name: "ha-113226-m02", mac: "52:54:00:4f:2c:56", ip: "192.168.39.233"} in network mk-ha-113226
	I0421 18:41:30.828076   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Getting to WaitForSSH function...
	I0421 18:41:30.828110   22327 main.go:141] libmachine: (ha-113226-m02) Reserved static IP address: 192.168.39.233
	I0421 18:41:30.828132   22327 main.go:141] libmachine: (ha-113226-m02) Waiting for SSH to be available...
	I0421 18:41:30.830409   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.830762   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:30.830786   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.830916   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Using SSH client type: external
	I0421 18:41:30.830944   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa (-rw-------)
	I0421 18:41:30.830973   22327 main.go:141] libmachine: (ha-113226-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:41:30.830993   22327 main.go:141] libmachine: (ha-113226-m02) DBG | About to run SSH command:
	I0421 18:41:30.831011   22327 main.go:141] libmachine: (ha-113226-m02) DBG | exit 0
	I0421 18:41:30.954705   22327 main.go:141] libmachine: (ha-113226-m02) DBG | SSH cmd err, output: <nil>: 
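
WaitForSSH, whose argument list appears in the DBG lines above, simply keeps running `exit 0` on the new VM over ssh with host-key checking disabled until the command succeeds. A trimmed sketch of that probe (the option list is a subset of the one in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the target with the same non-interactive options
    // shown in the log; success means sshd is up and the key is accepted.
    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa"
        for !sshReady("192.168.39.233", key) {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
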
	I0421 18:41:30.954934   22327 main.go:141] libmachine: (ha-113226-m02) KVM machine creation complete!
	I0421 18:41:30.955258   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetConfigRaw
	I0421 18:41:30.955748   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:30.955937   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:30.956072   22327 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:41:30.956083   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:41:30.957450   22327 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:41:30.957468   22327 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:41:30.957475   22327 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:41:30.957482   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:30.959523   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.959883   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:30.959911   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:30.960012   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:30.960181   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:30.960368   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:30.960546   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:30.960719   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:30.960918   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:30.960929   22327 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:41:31.062147   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:41:31.062177   22327 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:41:31.062187   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.064786   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.065144   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.065176   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.065288   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.065458   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.065630   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.065764   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.065979   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.066213   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.066228   22327 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:41:31.167215   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:41:31.167308   22327 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:41:31.167324   22327 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:41:31.167335   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:31.167592   22327 buildroot.go:166] provisioning hostname "ha-113226-m02"
	I0421 18:41:31.167624   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:31.167843   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.170564   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.170969   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.171002   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.171200   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.171379   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.171546   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.171694   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.171873   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.172089   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.172108   22327 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226-m02 && echo "ha-113226-m02" | sudo tee /etc/hostname
	I0421 18:41:31.291064   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226-m02
	
	I0421 18:41:31.291121   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.294169   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.294640   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.294672   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.294831   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.295021   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.295188   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.295338   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.295508   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.295669   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.295685   22327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:41:31.404406   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:41:31.404431   22327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:41:31.404444   22327 buildroot.go:174] setting up certificates
	I0421 18:41:31.404452   22327 provision.go:84] configureAuth start
	I0421 18:41:31.404463   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetMachineName
	I0421 18:41:31.404727   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:31.407309   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.407631   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.407650   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.407912   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.410073   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.410371   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.410394   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.410523   22327 provision.go:143] copyHostCerts
	I0421 18:41:31.410547   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:41:31.410573   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:41:31.410582   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:41:31.410641   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:41:31.410712   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:41:31.410732   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:41:31.410736   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:41:31.410759   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:41:31.410800   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:41:31.410816   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:41:31.410822   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:41:31.410841   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:41:31.410886   22327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226-m02 san=[127.0.0.1 192.168.39.233 ha-113226-m02 localhost minikube]
	I0421 18:41:31.532353   22327 provision.go:177] copyRemoteCerts
	I0421 18:41:31.532405   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:41:31.532428   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.534989   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.535344   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.535380   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.535524   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.535690   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.535836   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.535959   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:31.617511   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:41:31.617593   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:41:31.645600   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:41:31.645661   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:41:31.671982   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:41:31.672047   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:41:31.699144   22327 provision.go:87] duration metric: took 294.678995ms to configureAuth
	I0421 18:41:31.699171   22327 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:41:31.699342   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:31.699431   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.702019   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.702444   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.702470   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.702623   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.702820   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.703023   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.703171   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.703345   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:31.703543   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:31.703558   22327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:41:31.986115   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:41:31.986143   22327 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:41:31.986154   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetURL
	I0421 18:41:31.987310   22327 main.go:141] libmachine: (ha-113226-m02) DBG | Using libvirt version 6000000
	I0421 18:41:31.989434   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.989816   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.989863   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.990014   22327 main.go:141] libmachine: Docker is up and running!
	I0421 18:41:31.990031   22327 main.go:141] libmachine: Reticulating splines...
	I0421 18:41:31.990039   22327 client.go:171] duration metric: took 24.543360917s to LocalClient.Create
	I0421 18:41:31.990078   22327 start.go:167] duration metric: took 24.543441614s to libmachine.API.Create "ha-113226"
	I0421 18:41:31.990093   22327 start.go:293] postStartSetup for "ha-113226-m02" (driver="kvm2")
	I0421 18:41:31.990108   22327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:41:31.990128   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:31.990355   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:41:31.990377   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:31.992571   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.992920   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:31.992946   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:31.993048   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:31.993211   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:31.993348   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:31.993479   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:32.075424   22327 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:41:32.080586   22327 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:41:32.080613   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:41:32.080685   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:41:32.080758   22327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:41:32.080770   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:41:32.080864   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:41:32.091018   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:41:32.119003   22327 start.go:296] duration metric: took 128.894041ms for postStartSetup
	I0421 18:41:32.119052   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetConfigRaw
	I0421 18:41:32.119702   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:32.122281   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.122634   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.122655   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.122936   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:41:32.123151   22327 start.go:128] duration metric: took 24.694989634s to createHost
	I0421 18:41:32.123175   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:32.125395   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.125656   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.125694   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.125820   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:32.125994   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.126140   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.126243   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:32.126388   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:41:32.126534   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0421 18:41:32.126545   22327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:41:32.227071   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713724892.197171216
	
	I0421 18:41:32.227097   22327 fix.go:216] guest clock: 1713724892.197171216
	I0421 18:41:32.227104   22327 fix.go:229] Guest: 2024-04-21 18:41:32.197171216 +0000 UTC Remote: 2024-04-21 18:41:32.123164613 +0000 UTC m=+80.820319053 (delta=74.006603ms)
	I0421 18:41:32.227119   22327 fix.go:200] guest clock delta is within tolerance: 74.006603ms
	I0421 18:41:32.227124   22327 start.go:83] releasing machines lock for "ha-113226-m02", held for 24.799092085s
	I0421 18:41:32.227141   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.227394   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:32.230084   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.230466   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.230492   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.233141   22327 out.go:177] * Found network options:
	I0421 18:41:32.234790   22327 out.go:177]   - NO_PROXY=192.168.39.60
	W0421 18:41:32.236133   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:41:32.236186   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.236815   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.236996   22327 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:41:32.237083   22327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:41:32.237123   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	W0421 18:41:32.237218   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:41:32.237300   22327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:41:32.237325   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:41:32.239834   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240100   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240208   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.240236   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240389   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:32.240499   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:32.240528   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.240527   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:32.240693   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:32.240695   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:41:32.240885   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:32.240902   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:41:32.241035   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:41:32.241137   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:41:32.491933   22327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:41:32.498595   22327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:41:32.498672   22327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:41:32.522547   22327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:41:32.522573   22327 start.go:494] detecting cgroup driver to use...
	I0421 18:41:32.522632   22327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:41:32.547775   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:41:32.563316   22327 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:41:32.563367   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:41:32.578972   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:41:32.593734   22327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:41:32.727976   22327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:41:32.884071   22327 docker.go:233] disabling docker service ...
	I0421 18:41:32.884133   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:41:32.900565   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:41:32.914082   22327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:41:33.062759   22327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:41:33.190485   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:41:33.207746   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:41:33.228289   22327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:41:33.228356   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.241881   22327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:41:33.241949   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.254726   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.266578   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.278457   22327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:41:33.290519   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.302272   22327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.321338   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:41:33.334037   22327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:41:33.345455   22327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:41:33.345503   22327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:41:33.360481   22327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:41:33.372097   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:41:33.488170   22327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:41:33.642971   22327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:41:33.643049   22327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:41:33.648549   22327 start.go:562] Will wait 60s for crictl version
	I0421 18:41:33.648606   22327 ssh_runner.go:195] Run: which crictl
	I0421 18:41:33.653179   22327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:41:33.694505   22327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:41:33.694566   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:41:33.725391   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:41:33.762152   22327 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:41:33.763577   22327 out.go:177]   - env NO_PROXY=192.168.39.60
	I0421 18:41:33.764790   22327 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:41:33.767163   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:33.767600   22327 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:41:23 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:41:33.767633   22327 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:41:33.767797   22327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:41:33.773374   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:41:33.788650   22327 mustload.go:65] Loading cluster: ha-113226
	I0421 18:41:33.788874   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:41:33.789129   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:33.789157   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:33.803420   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0421 18:41:33.803803   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:33.804361   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:33.804381   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:33.804783   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:33.804993   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:41:33.806740   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:41:33.807049   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:41:33.807072   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:41:33.821142   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I0421 18:41:33.821741   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:41:33.822132   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:41:33.822154   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:41:33.822477   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:41:33.822654   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:41:33.822814   22327 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.233
	I0421 18:41:33.822825   22327 certs.go:194] generating shared ca certs ...
	I0421 18:41:33.822842   22327 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:33.822974   22327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:41:33.823016   22327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:41:33.823025   22327 certs.go:256] generating profile certs ...
	I0421 18:41:33.823095   22327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:41:33.823119   22327 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886
	I0421 18:41:33.823132   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.233 192.168.39.254]
	I0421 18:41:34.029355   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886 ...
	I0421 18:41:34.029382   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886: {Name:mk42199ee0de701846fe5b05e91e06a1c77e212f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:34.029560   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886 ...
	I0421 18:41:34.029577   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886: {Name:mk70f9a427951197d7f02d7d00c32af57a972251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:41:34.029676   22327 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.8d0fe886 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:41:34.029806   22327 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.8d0fe886 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:41:34.029925   22327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:41:34.029941   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:41:34.029952   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:41:34.029962   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:41:34.029975   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:41:34.029985   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:41:34.029996   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:41:34.030007   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:41:34.030019   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:41:34.030089   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:41:34.030126   22327 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:41:34.030135   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:41:34.030156   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:41:34.030176   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:41:34.030202   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:41:34.030246   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:41:34.030273   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.030287   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.030300   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.030328   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:41:34.032917   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:34.033260   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:41:34.033290   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:41:34.033418   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:41:34.033624   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:41:34.033777   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:41:34.033901   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:41:34.110356   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 18:41:34.116257   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 18:41:34.128143   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 18:41:34.133441   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 18:41:34.146570   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 18:41:34.151351   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 18:41:34.163721   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 18:41:34.168800   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 18:41:34.189063   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 18:41:34.193942   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 18:41:34.205764   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 18:41:34.210551   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0421 18:41:34.222843   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:41:34.251304   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:41:34.277971   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:41:34.304333   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:41:34.331123   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0421 18:41:34.356975   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 18:41:34.383415   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:41:34.408381   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:41:34.434315   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:41:34.460926   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:41:34.488035   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:41:34.513110   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 18:41:34.531510   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 18:41:34.550153   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 18:41:34.569055   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 18:41:34.588137   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 18:41:34.608510   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0421 18:41:34.628141   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 18:41:34.645910   22327 ssh_runner.go:195] Run: openssl version
	I0421 18:41:34.652072   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:41:34.664058   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.668808   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.668855   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:41:34.675366   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 18:41:34.688325   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:41:34.701131   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.706261   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.706309   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:41:34.712415   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:41:34.724121   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:41:34.736978   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.741933   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.741983   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:41:34.747903   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:41:34.760620   22327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:41:34.765641   22327 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:41:34.765735   22327 kubeadm.go:928] updating node {m02 192.168.39.233 8443 v1.30.0 crio true true} ...
	I0421 18:41:34.765892   22327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:41:34.765936   22327 kube-vip.go:111] generating kube-vip config ...
	I0421 18:41:34.765979   22327 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:41:34.786047   22327 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:41:34.786116   22327 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0421 18:41:34.786171   22327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:41:34.799352   22327 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 18:41:34.799457   22327 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 18:41:34.811282   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 18:41:34.811313   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:41:34.811333   22327 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0421 18:41:34.811375   22327 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0421 18:41:34.811388   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:41:34.816951   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 18:41:34.816974   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 18:42:06.213874   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:42:06.213954   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:42:06.220168   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 18:42:06.220202   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 18:42:34.733837   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:42:34.750974   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:42:34.751083   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:42:34.756492   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 18:42:34.756525   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0421 18:42:35.225131   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 18:42:35.236371   22327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 18:42:35.256435   22327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:42:35.274746   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 18:42:35.294770   22327 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:42:35.299386   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:42:35.314917   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:42:35.444048   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:42:35.462102   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:42:35.462459   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:42:35.462490   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:42:35.477409   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0421 18:42:35.477847   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:42:35.478377   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:42:35.478404   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:42:35.478731   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:42:35.478934   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:42:35.479125   22327 start.go:316] joinCluster: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:42:35.479245   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 18:42:35.479266   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:42:35.482274   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:42:35.482698   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:42:35.482740   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:42:35.482920   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:42:35.483131   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:42:35.483292   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:42:35.483468   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:42:35.657255   22327 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:42:35.657356   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hv1tgo.edjk7g6dh6kic30b --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m02 --control-plane --apiserver-advertise-address=192.168.39.233 --apiserver-bind-port=8443"
	I0421 18:42:58.476187   22327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hv1tgo.edjk7g6dh6kic30b --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m02 --control-plane --apiserver-advertise-address=192.168.39.233 --apiserver-bind-port=8443": (22.818796491s)
	I0421 18:42:58.476227   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 18:42:59.060505   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-113226-m02 minikube.k8s.io/updated_at=2024_04_21T18_42_59_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-113226 minikube.k8s.io/primary=false
	I0421 18:42:59.200952   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-113226-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 18:42:59.318871   22327 start.go:318] duration metric: took 23.839742493s to joinCluster
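
joinCluster above follows the usual HA sequence: ask the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0), append the flags a new control-plane member needs, run the result on m02, then label the node and drop its control-plane NoSchedule taint. Below is a hedged sketch of how those extra flags could be appended to the printed command; controlPlaneJoinArgs is a made-up helper and the token and hash are placeholders, while the flag values match the ones visible in the log.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // controlPlaneJoinArgs extends the join command printed by
    // `kubeadm token create --print-join-command` with the flags a new
    // control-plane member needs (CRI socket, advertise address, bind port).
    func controlPlaneJoinArgs(printed, nodeName, advertiseIP string) string {
    	extra := []string{
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/crio/crio.sock",
    		"--node-name=" + nodeName,
    		"--control-plane",
    		"--apiserver-advertise-address=" + advertiseIP,
    		"--apiserver-bind-port=8443",
    	}
    	return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
    }

    func main() {
    	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
    	fmt.Println(controlPlaneJoinArgs(printed, "ha-113226-m02", "192.168.39.233"))
    }
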
	I0421 18:42:59.318957   22327 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:42:59.320426   22327 out.go:177] * Verifying Kubernetes components...
	I0421 18:42:59.319266   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:42:59.321784   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:42:59.577798   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:42:59.662837   22327 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:42:59.663046   22327 kapi.go:59] client config for ha-113226: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 18:42:59.663129   22327 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0421 18:42:59.663334   22327 node_ready.go:35] waiting up to 6m0s for node "ha-113226-m02" to be "Ready" ...
	I0421 18:42:59.663433   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:42:59.663441   22327 round_trippers.go:469] Request Headers:
	I0421 18:42:59.663449   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:42:59.663453   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:42:59.675124   22327 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 18:43:00.163842   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:00.163863   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:00.163871   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:00.163875   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:00.175446   22327 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 18:43:00.663890   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:00.663914   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:00.663923   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:00.663927   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:00.669145   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:01.164255   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:01.164274   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:01.164284   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:01.164290   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:01.168735   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:01.663577   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:01.663603   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:01.663612   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:01.663616   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:01.668364   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:01.669245   22327 node_ready.go:53] node "ha-113226-m02" has status "Ready":"False"
	I0421 18:43:02.163638   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:02.163664   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:02.163671   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:02.163676   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:02.167128   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:02.664253   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:02.664296   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:02.664307   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:02.664314   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:02.667211   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:03.164498   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:03.164518   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:03.164527   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:03.164529   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:03.168450   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:03.663538   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:03.663559   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:03.663567   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:03.663570   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:03.669151   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:03.669919   22327 node_ready.go:53] node "ha-113226-m02" has status "Ready":"False"
	I0421 18:43:04.164139   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:04.164165   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:04.164175   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:04.164182   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:04.169937   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:04.663857   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:04.663879   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:04.663889   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:04.663894   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:04.667722   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:05.163827   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:05.163849   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:05.163857   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:05.163864   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:05.167691   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:05.664121   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:05.664160   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:05.664168   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:05.664171   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:05.667759   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:06.164027   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:06.164051   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:06.164060   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:06.164066   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:06.167829   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:06.168744   22327 node_ready.go:53] node "ha-113226-m02" has status "Ready":"False"
	I0421 18:43:06.663968   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:06.663992   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:06.664000   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:06.664003   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:06.668068   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:07.164218   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:07.164236   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:07.164243   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:07.164246   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:07.167556   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:07.664052   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:07.664080   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:07.664090   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:07.664095   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:07.667546   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:08.163677   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:08.163696   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.163703   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.163707   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.169726   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:08.170739   22327 node_ready.go:49] node "ha-113226-m02" has status "Ready":"True"
	I0421 18:43:08.170756   22327 node_ready.go:38] duration metric: took 8.507391931s for node "ha-113226-m02" to be "Ready" ...
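
The round_trippers traffic above is the node_ready loop: the Node object is re-fetched roughly twice a second until its Ready condition turns True, which took about 8.5s here. A rough client-go equivalent is sketched below, assuming a kubeconfig on disk; the interval and timeout are illustrative rather than minikube's exact values.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports Ready=True.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-113226-m02", 6*time.Minute))
    }
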
	I0421 18:43:08.170769   22327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:43:08.170850   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:08.170860   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.170867   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.170872   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.175673   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:08.182999   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.183061   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n8sbt
	I0421 18:43:08.183070   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.183077   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.183081   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.185657   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.186391   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:08.186405   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.186412   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.186416   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.192600   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:08.193122   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:08.193138   22327 pod_ready.go:81] duration metric: took 10.120033ms for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.193150   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.193211   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zhskp
	I0421 18:43:08.193222   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.193232   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.193244   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.195933   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.196692   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:08.196706   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.196716   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.196721   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.199041   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.199607   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:08.199626   22327 pod_ready.go:81] duration metric: took 6.468093ms for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.199637   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.199678   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226
	I0421 18:43:08.199685   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.199692   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.199697   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.202002   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.202686   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:08.202699   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.202706   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.202710   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.204929   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.205609   22327 pod_ready.go:92] pod "etcd-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:08.205627   22327 pod_ready.go:81] duration metric: took 5.983588ms for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.205638   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:08.205687   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:08.205698   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.205708   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.205726   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.207996   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.209058   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:08.209073   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.209079   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.209083   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.211914   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:08.705961   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:08.705984   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.705992   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.705996   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.709215   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:08.709932   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:08.709948   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:08.709955   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:08.709958   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:08.713090   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:09.206310   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:09.206331   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.206339   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.206348   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.209837   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:09.210511   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:09.210525   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.210532   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.210537   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.213428   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:09.706298   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:09.706320   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.706328   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.706332   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.709783   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:09.710698   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:09.710714   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:09.710721   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:09.710726   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:09.713439   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:10.206480   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:10.206499   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.206506   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.206510   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.209906   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:10.210574   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:10.210588   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.210594   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.210597   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.213381   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:10.213996   22327 pod_ready.go:102] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"False"
	I0421 18:43:10.706731   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:10.706751   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.706759   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.706763   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.710560   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:10.711506   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:10.711524   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:10.711534   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:10.711540   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:10.714802   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:11.205901   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:11.205925   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.205933   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.205936   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.209465   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:11.210606   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:11.210620   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.210628   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.210631   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.213578   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:11.706583   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:11.706607   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.706615   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.706619   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.709755   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:11.710680   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:11.710695   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:11.710702   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:11.710706   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:11.713436   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:12.206838   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:12.206858   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.206865   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.206870   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.210978   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:12.211786   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:12.211801   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.211808   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.211811   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.214802   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:12.215457   22327 pod_ready.go:102] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"False"
	I0421 18:43:12.706314   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:12.706335   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.706343   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.706348   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.715912   22327 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 18:43:12.716796   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:12.716810   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:12.716817   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:12.716820   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:12.723311   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:13.206422   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:13.206442   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.206450   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.206454   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.210583   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:13.211537   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:13.211552   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.211559   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.211564   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.214596   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:13.706297   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:13.706326   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.706336   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.706341   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.709424   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:13.710074   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:13.710087   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:13.710097   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:13.710103   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:13.713769   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:14.206579   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:14.206607   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.206617   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.206623   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.210189   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:14.210935   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:14.210950   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.210957   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.210964   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.213937   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:14.706103   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:14.706127   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.706136   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.706141   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.711621   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:14.712567   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:14.712586   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:14.712598   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:14.712602   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:14.715648   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:14.716207   22327 pod_ready.go:102] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"False"
	I0421 18:43:15.206720   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:15.206745   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.206753   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.206758   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.210676   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:15.211393   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:15.211409   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.211419   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.211423   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.214588   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:15.705919   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:15.705942   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.705950   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.705954   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.710695   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:15.711698   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:15.711712   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:15.711720   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:15.711724   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:15.715433   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.206082   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:43:16.206101   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.206108   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.206117   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.209694   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.210466   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:16.210483   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.210489   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.210495   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.213284   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.213983   22327 pod_ready.go:92] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.214002   22327 pod_ready.go:81] duration metric: took 8.00835575s for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.214021   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.214151   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226
	I0421 18:43:16.214165   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.214175   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.214186   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.221621   22327 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 18:43:16.222382   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.222401   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.222409   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.222414   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.224694   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.225320   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.225340   22327 pod_ready.go:81] duration metric: took 11.309161ms for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.225352   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.225405   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226-m02
	I0421 18:43:16.225416   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.225426   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.225435   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.228686   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.229568   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:16.229585   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.229593   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.229597   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.232962   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.233488   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.233502   22327 pod_ready.go:81] duration metric: took 8.143635ms for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.233511   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.233553   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226
	I0421 18:43:16.233560   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.233567   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.233572   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.236070   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.236695   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.236709   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.236715   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.236718   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.239550   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.240022   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.240038   22327 pod_ready.go:81] duration metric: took 6.518593ms for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.240046   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.240088   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m02
	I0421 18:43:16.240095   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.240101   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.240104   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.242552   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.243030   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:16.243043   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.243050   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.243053   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.245520   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:43:16.246089   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.246105   22327 pod_ready.go:81] duration metric: took 6.052729ms for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.246113   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.406449   22327 request.go:629] Waited for 160.279782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:43:16.406507   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:43:16.406512   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.406519   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.406524   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.410076   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:16.606365   22327 request.go:629] Waited for 195.467139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.606428   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:16.606434   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.606441   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.606448   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.613365   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:16.614135   22327 pod_ready.go:92] pod "kube-proxy-h75dp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:16.614155   22327 pod_ready.go:81] duration metric: took 368.036366ms for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.614166   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:16.806362   22327 request.go:629] Waited for 192.134324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:43:16.806447   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:43:16.806453   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:16.806460   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:16.806466   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:16.810569   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:17.006920   22327 request.go:629] Waited for 195.417936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.006998   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.007006   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.007016   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.007023   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.010869   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.011538   22327 pod_ready.go:92] pod "kube-proxy-nsv74" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:17.011558   22327 pod_ready.go:81] duration metric: took 397.385262ms for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.011572   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.206608   22327 request.go:629] Waited for 194.957893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:43:17.206691   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:43:17.206699   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.206708   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.206718   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.210673   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.406933   22327 request.go:629] Waited for 195.36392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:17.407049   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:43:17.407059   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.407066   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.407071   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.410934   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.411897   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:17.411918   22327 pod_ready.go:81] duration metric: took 400.337546ms for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.411932   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.606521   22327 request.go:629] Waited for 194.509454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:43:17.606608   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:43:17.606615   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.606625   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.606644   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.611064   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:43:17.807083   22327 request.go:629] Waited for 195.387551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.807142   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:43:17.807158   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.807171   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.807178   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.810959   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:17.811539   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:43:17.811556   22327 pod_ready.go:81] duration metric: took 399.608297ms for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:43:17.811568   22327 pod_ready.go:38] duration metric: took 9.640761216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
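
The pod_ready phase above applies the same pattern per pod: for each system-critical label (k8s-app=kube-dns, component=etcd, and so on) the pod is fetched, its Ready condition inspected, and the node it runs on re-checked. Below is a condensed client-go sketch of that per-selector check; the selector list mirrors the one named in the log, everything else is illustrative.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // unreadySystemPods lists kube-system pods matching selector and returns
    // the names of those whose Ready condition is not yet True.
    func unreadySystemPods(cs *kubernetes.Clientset, selector string) ([]string, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return nil, err
    	}
    	var unready []string
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			unready = append(unready, p.Name)
    		}
    	}
    	return unready, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
    	for _, sel := range selectors {
    		names, err := unreadySystemPods(cs, sel)
    		fmt.Println(sel, names, err)
    	}
    }
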
	I0421 18:43:17.811586   22327 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:43:17.811648   22327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:43:17.830033   22327 api_server.go:72] duration metric: took 18.511038156s to wait for apiserver process to appear ...
	I0421 18:43:17.830054   22327 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:43:17.830094   22327 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0421 18:43:17.836803   22327 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0421 18:43:17.836865   22327 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0421 18:43:17.836872   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:17.836885   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:17.836890   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:17.837764   22327 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0421 18:43:17.837852   22327 api_server.go:141] control plane version: v1.30.0
	I0421 18:43:17.837866   22327 api_server.go:131] duration metric: took 7.796464ms to wait for apiserver health ...
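
The healthz gate is an HTTPS GET against the API server that must come back 200 with body "ok". A rough stdlib sketch of that probe; skipping TLS verification is purely an assumption to keep the example short, not a statement about how the test harness authenticates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; InsecureSkipVerify is only for this sketch.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.60:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}
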
	I0421 18:43:17.837872   22327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:43:18.006236   22327 request.go:629] Waited for 168.289504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.006290   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.006295   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.006302   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.006305   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.012881   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:18.017830   22327 system_pods.go:59] 17 kube-system pods found
	I0421 18:43:18.017859   22327 system_pods.go:61] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:43:18.017870   22327 system_pods.go:61] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:43:18.017878   22327 system_pods.go:61] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:43:18.017881   22327 system_pods.go:61] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:43:18.017885   22327 system_pods.go:61] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:43:18.017888   22327 system_pods.go:61] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:43:18.017891   22327 system_pods.go:61] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:43:18.017894   22327 system_pods.go:61] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:43:18.017897   22327 system_pods.go:61] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:43:18.017900   22327 system_pods.go:61] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:43:18.017903   22327 system_pods.go:61] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:43:18.017906   22327 system_pods.go:61] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:43:18.017909   22327 system_pods.go:61] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:43:18.017912   22327 system_pods.go:61] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:43:18.017916   22327 system_pods.go:61] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:43:18.017918   22327 system_pods.go:61] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:43:18.017921   22327 system_pods.go:61] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:43:18.017927   22327 system_pods.go:74] duration metric: took 180.049975ms to wait for pod list to return data ...
	I0421 18:43:18.017938   22327 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:43:18.206377   22327 request.go:629] Waited for 188.343612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:43:18.206447   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:43:18.206455   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.206464   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.206472   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.209855   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:43:18.210091   22327 default_sa.go:45] found service account: "default"
	I0421 18:43:18.210111   22327 default_sa.go:55] duration metric: took 192.167076ms for default service account to be created ...
	I0421 18:43:18.210123   22327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:43:18.406546   22327 request.go:629] Waited for 196.356952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.406625   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:43:18.406630   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.406637   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.406644   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.412468   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:43:18.417535   22327 system_pods.go:86] 17 kube-system pods found
	I0421 18:43:18.417558   22327 system_pods.go:89] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:43:18.417563   22327 system_pods.go:89] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:43:18.417568   22327 system_pods.go:89] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:43:18.417572   22327 system_pods.go:89] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:43:18.417576   22327 system_pods.go:89] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:43:18.417581   22327 system_pods.go:89] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:43:18.417586   22327 system_pods.go:89] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:43:18.417590   22327 system_pods.go:89] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:43:18.417594   22327 system_pods.go:89] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:43:18.417598   22327 system_pods.go:89] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:43:18.417602   22327 system_pods.go:89] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:43:18.417607   22327 system_pods.go:89] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:43:18.417610   22327 system_pods.go:89] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:43:18.417617   22327 system_pods.go:89] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:43:18.417620   22327 system_pods.go:89] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:43:18.417623   22327 system_pods.go:89] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:43:18.417633   22327 system_pods.go:89] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:43:18.417640   22327 system_pods.go:126] duration metric: took 207.510678ms to wait for k8s-apps to be running ...
	I0421 18:43:18.417649   22327 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:43:18.417688   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:43:18.435157   22327 system_svc.go:56] duration metric: took 17.498178ms WaitForService to wait for kubelet
	I0421 18:43:18.435194   22327 kubeadm.go:576] duration metric: took 19.116202297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
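
The kubelet check above shells out to systemd: the `is-active --quiet` form prints nothing and exits 0 only while the unit is active, so the probe only needs the exit status. A stand-alone sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; any non-zero status means it is not.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Printf("kubelet active: %v\n", err == nil)
}
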
	I0421 18:43:18.435214   22327 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:43:18.606624   22327 request.go:629] Waited for 171.332169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0421 18:43:18.606699   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0421 18:43:18.606705   22327 round_trippers.go:469] Request Headers:
	I0421 18:43:18.606713   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:43:18.606723   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:43:18.613229   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:43:18.614132   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:43:18.614155   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:43:18.614167   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:43:18.614171   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:43:18.614175   22327 node_conditions.go:105] duration metric: took 178.956677ms to run NodePressure ...
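
The NodePressure step reads each node's advertised capacity; this run reports 2 CPUs and 17734596Ki of ephemeral storage per node. A sketch reading the same fields from Node.Status.Capacity with client-go (same kubeconfig assumption as the pod-readiness sketch earlier):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These are the same figures the "node cpu capacity" and
		// "node storage ephemeral capacity" lines above report.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
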
	I0421 18:43:18.614186   22327 start.go:240] waiting for startup goroutines ...
	I0421 18:43:18.614207   22327 start.go:254] writing updated cluster config ...
	I0421 18:43:18.616231   22327 out.go:177] 
	I0421 18:43:18.618028   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:43:18.618130   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:43:18.620030   22327 out.go:177] * Starting "ha-113226-m03" control-plane node in "ha-113226" cluster
	I0421 18:43:18.621642   22327 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:43:18.621669   22327 cache.go:56] Caching tarball of preloaded images
	I0421 18:43:18.621783   22327 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:43:18.621798   22327 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:43:18.621932   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:43:18.622141   22327 start.go:360] acquireMachinesLock for ha-113226-m03: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:43:18.622184   22327 start.go:364] duration metric: took 23.454µs to acquireMachinesLock for "ha-113226-m03"
	I0421 18:43:18.622201   22327 start.go:93] Provisioning new machine with config: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:43:18.622327   22327 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0421 18:43:18.623934   22327 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 18:43:18.624010   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:18.624040   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:18.638967   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I0421 18:43:18.639331   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:18.639782   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:18.639801   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:18.640111   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:18.640292   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:18.640442   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:18.640574   22327 start.go:159] libmachine.API.Create for "ha-113226" (driver="kvm2")
	I0421 18:43:18.640605   22327 client.go:168] LocalClient.Create starting
	I0421 18:43:18.640635   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 18:43:18.640665   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:43:18.640679   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:43:18.640725   22327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 18:43:18.640745   22327 main.go:141] libmachine: Decoding PEM data...
	I0421 18:43:18.640756   22327 main.go:141] libmachine: Parsing certificate...
	I0421 18:43:18.640772   22327 main.go:141] libmachine: Running pre-create checks...
	I0421 18:43:18.640781   22327 main.go:141] libmachine: (ha-113226-m03) Calling .PreCreateCheck
	I0421 18:43:18.640931   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetConfigRaw
	I0421 18:43:18.641262   22327 main.go:141] libmachine: Creating machine...
	I0421 18:43:18.641275   22327 main.go:141] libmachine: (ha-113226-m03) Calling .Create
	I0421 18:43:18.641396   22327 main.go:141] libmachine: (ha-113226-m03) Creating KVM machine...
	I0421 18:43:18.642673   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found existing default KVM network
	I0421 18:43:18.642837   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found existing private KVM network mk-ha-113226
	I0421 18:43:18.642973   22327 main.go:141] libmachine: (ha-113226-m03) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03 ...
	I0421 18:43:18.642998   22327 main.go:141] libmachine: (ha-113226-m03) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:43:18.643012   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:18.642938   23334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:43:18.643087   22327 main.go:141] libmachine: (ha-113226-m03) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:43:18.862514   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:18.862411   23334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa...
	I0421 18:43:19.041531   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:19.041385   23334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/ha-113226-m03.rawdisk...
	I0421 18:43:19.041571   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Writing magic tar header
	I0421 18:43:19.041587   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Writing SSH key tar header
	I0421 18:43:19.041604   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:19.041529   23334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03 ...
	I0421 18:43:19.041695   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03
	I0421 18:43:19.041732   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 18:43:19.041747   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03 (perms=drwx------)
	I0421 18:43:19.041759   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:43:19.041772   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 18:43:19.041791   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 18:43:19.041811   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 18:43:19.041826   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 18:43:19.041848   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 18:43:19.041868   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 18:43:19.041884   22327 main.go:141] libmachine: (ha-113226-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 18:43:19.041901   22327 main.go:141] libmachine: (ha-113226-m03) Creating domain...
	I0421 18:43:19.041917   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home/jenkins
	I0421 18:43:19.041927   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Checking permissions on dir: /home
	I0421 18:43:19.041958   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Skipping /home - not owner
	I0421 18:43:19.042749   22327 main.go:141] libmachine: (ha-113226-m03) define libvirt domain using xml: 
	I0421 18:43:19.042770   22327 main.go:141] libmachine: (ha-113226-m03) <domain type='kvm'>
	I0421 18:43:19.042782   22327 main.go:141] libmachine: (ha-113226-m03)   <name>ha-113226-m03</name>
	I0421 18:43:19.042790   22327 main.go:141] libmachine: (ha-113226-m03)   <memory unit='MiB'>2200</memory>
	I0421 18:43:19.042799   22327 main.go:141] libmachine: (ha-113226-m03)   <vcpu>2</vcpu>
	I0421 18:43:19.042813   22327 main.go:141] libmachine: (ha-113226-m03)   <features>
	I0421 18:43:19.042821   22327 main.go:141] libmachine: (ha-113226-m03)     <acpi/>
	I0421 18:43:19.042829   22327 main.go:141] libmachine: (ha-113226-m03)     <apic/>
	I0421 18:43:19.042842   22327 main.go:141] libmachine: (ha-113226-m03)     <pae/>
	I0421 18:43:19.042857   22327 main.go:141] libmachine: (ha-113226-m03)     
	I0421 18:43:19.042870   22327 main.go:141] libmachine: (ha-113226-m03)   </features>
	I0421 18:43:19.042883   22327 main.go:141] libmachine: (ha-113226-m03)   <cpu mode='host-passthrough'>
	I0421 18:43:19.042895   22327 main.go:141] libmachine: (ha-113226-m03)   
	I0421 18:43:19.042908   22327 main.go:141] libmachine: (ha-113226-m03)   </cpu>
	I0421 18:43:19.042920   22327 main.go:141] libmachine: (ha-113226-m03)   <os>
	I0421 18:43:19.042938   22327 main.go:141] libmachine: (ha-113226-m03)     <type>hvm</type>
	I0421 18:43:19.042950   22327 main.go:141] libmachine: (ha-113226-m03)     <boot dev='cdrom'/>
	I0421 18:43:19.042963   22327 main.go:141] libmachine: (ha-113226-m03)     <boot dev='hd'/>
	I0421 18:43:19.042976   22327 main.go:141] libmachine: (ha-113226-m03)     <bootmenu enable='no'/>
	I0421 18:43:19.042987   22327 main.go:141] libmachine: (ha-113226-m03)   </os>
	I0421 18:43:19.042999   22327 main.go:141] libmachine: (ha-113226-m03)   <devices>
	I0421 18:43:19.043016   22327 main.go:141] libmachine: (ha-113226-m03)     <disk type='file' device='cdrom'>
	I0421 18:43:19.043038   22327 main.go:141] libmachine: (ha-113226-m03)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/boot2docker.iso'/>
	I0421 18:43:19.043051   22327 main.go:141] libmachine: (ha-113226-m03)       <target dev='hdc' bus='scsi'/>
	I0421 18:43:19.043069   22327 main.go:141] libmachine: (ha-113226-m03)       <readonly/>
	I0421 18:43:19.043090   22327 main.go:141] libmachine: (ha-113226-m03)     </disk>
	I0421 18:43:19.043109   22327 main.go:141] libmachine: (ha-113226-m03)     <disk type='file' device='disk'>
	I0421 18:43:19.043123   22327 main.go:141] libmachine: (ha-113226-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 18:43:19.043138   22327 main.go:141] libmachine: (ha-113226-m03)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/ha-113226-m03.rawdisk'/>
	I0421 18:43:19.043151   22327 main.go:141] libmachine: (ha-113226-m03)       <target dev='hda' bus='virtio'/>
	I0421 18:43:19.043167   22327 main.go:141] libmachine: (ha-113226-m03)     </disk>
	I0421 18:43:19.043182   22327 main.go:141] libmachine: (ha-113226-m03)     <interface type='network'>
	I0421 18:43:19.043191   22327 main.go:141] libmachine: (ha-113226-m03)       <source network='mk-ha-113226'/>
	I0421 18:43:19.043198   22327 main.go:141] libmachine: (ha-113226-m03)       <model type='virtio'/>
	I0421 18:43:19.043203   22327 main.go:141] libmachine: (ha-113226-m03)     </interface>
	I0421 18:43:19.043211   22327 main.go:141] libmachine: (ha-113226-m03)     <interface type='network'>
	I0421 18:43:19.043222   22327 main.go:141] libmachine: (ha-113226-m03)       <source network='default'/>
	I0421 18:43:19.043229   22327 main.go:141] libmachine: (ha-113226-m03)       <model type='virtio'/>
	I0421 18:43:19.043235   22327 main.go:141] libmachine: (ha-113226-m03)     </interface>
	I0421 18:43:19.043246   22327 main.go:141] libmachine: (ha-113226-m03)     <serial type='pty'>
	I0421 18:43:19.043252   22327 main.go:141] libmachine: (ha-113226-m03)       <target port='0'/>
	I0421 18:43:19.043262   22327 main.go:141] libmachine: (ha-113226-m03)     </serial>
	I0421 18:43:19.043281   22327 main.go:141] libmachine: (ha-113226-m03)     <console type='pty'>
	I0421 18:43:19.043302   22327 main.go:141] libmachine: (ha-113226-m03)       <target type='serial' port='0'/>
	I0421 18:43:19.043324   22327 main.go:141] libmachine: (ha-113226-m03)     </console>
	I0421 18:43:19.043333   22327 main.go:141] libmachine: (ha-113226-m03)     <rng model='virtio'>
	I0421 18:43:19.043344   22327 main.go:141] libmachine: (ha-113226-m03)       <backend model='random'>/dev/random</backend>
	I0421 18:43:19.043352   22327 main.go:141] libmachine: (ha-113226-m03)     </rng>
	I0421 18:43:19.043361   22327 main.go:141] libmachine: (ha-113226-m03)     
	I0421 18:43:19.043369   22327 main.go:141] libmachine: (ha-113226-m03)     
	I0421 18:43:19.043382   22327 main.go:141] libmachine: (ha-113226-m03)   </devices>
	I0421 18:43:19.043396   22327 main.go:141] libmachine: (ha-113226-m03) </domain>
	I0421 18:43:19.043410   22327 main.go:141] libmachine: (ha-113226-m03) 
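
The block above is the libvirt domain XML the kvm2 driver defines before booting the node VM. As a rough command-line equivalent (not what minikube itself executes), the same XML written to a file can be defined and started with virsh; a small Go sketch driving that, assuming the XML was saved as domain.xml:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command against the same qemu:///system URI the driver uses
// and prints its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	run("virsh", "--connect", "qemu:///system", "define", "domain.xml")
	run("virsh", "--connect", "qemu:///system", "start", "ha-113226-m03")
}

The "Waiting to get IP" lines that follow correspond to watching the DHCP leases on the mk-ha-113226 network for the domain's MAC address; `virsh net-dhcp-leases mk-ha-113226` exposes the same lease data.
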
	I0421 18:43:19.050231   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:bb:88:d2 in network default
	I0421 18:43:19.050893   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:19.050922   22327 main.go:141] libmachine: (ha-113226-m03) Ensuring networks are active...
	I0421 18:43:19.051681   22327 main.go:141] libmachine: (ha-113226-m03) Ensuring network default is active
	I0421 18:43:19.052028   22327 main.go:141] libmachine: (ha-113226-m03) Ensuring network mk-ha-113226 is active
	I0421 18:43:19.052513   22327 main.go:141] libmachine: (ha-113226-m03) Getting domain xml...
	I0421 18:43:19.053201   22327 main.go:141] libmachine: (ha-113226-m03) Creating domain...
	I0421 18:43:20.282657   22327 main.go:141] libmachine: (ha-113226-m03) Waiting to get IP...
	I0421 18:43:20.283405   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:20.283765   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:20.283799   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:20.283754   23334 retry.go:31] will retry after 263.965209ms: waiting for machine to come up
	I0421 18:43:20.549193   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:20.549586   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:20.549612   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:20.549548   23334 retry.go:31] will retry after 307.648351ms: waiting for machine to come up
	I0421 18:43:20.858779   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:20.859186   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:20.859208   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:20.859147   23334 retry.go:31] will retry after 478.221684ms: waiting for machine to come up
	I0421 18:43:21.338809   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:21.339242   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:21.339264   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:21.339199   23334 retry.go:31] will retry after 454.481902ms: waiting for machine to come up
	I0421 18:43:21.794928   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:21.795348   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:21.795379   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:21.795316   23334 retry.go:31] will retry after 659.132545ms: waiting for machine to come up
	I0421 18:43:22.456306   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:22.456865   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:22.456889   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:22.456832   23334 retry.go:31] will retry after 627.99293ms: waiting for machine to come up
	I0421 18:43:23.086265   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:23.086778   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:23.086807   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:23.086727   23334 retry.go:31] will retry after 949.480394ms: waiting for machine to come up
	I0421 18:43:24.038224   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:24.038692   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:24.038717   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:24.038652   23334 retry.go:31] will retry after 1.382407958s: waiting for machine to come up
	I0421 18:43:25.423095   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:25.423529   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:25.423558   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:25.423494   23334 retry.go:31] will retry after 1.171639093s: waiting for machine to come up
	I0421 18:43:26.596533   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:26.596951   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:26.596994   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:26.596935   23334 retry.go:31] will retry after 2.17194928s: waiting for machine to come up
	I0421 18:43:28.770642   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:28.771108   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:28.771130   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:28.771055   23334 retry.go:31] will retry after 2.597239918s: waiting for machine to come up
	I0421 18:43:31.371688   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:31.372148   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:31.372185   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:31.372084   23334 retry.go:31] will retry after 2.290553278s: waiting for machine to come up
	I0421 18:43:33.664411   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:33.664824   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:33.664857   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:33.664778   23334 retry.go:31] will retry after 3.791671556s: waiting for machine to come up
	I0421 18:43:37.459069   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:37.459525   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find current IP address of domain ha-113226-m03 in network mk-ha-113226
	I0421 18:43:37.459554   22327 main.go:141] libmachine: (ha-113226-m03) DBG | I0421 18:43:37.459485   23334 retry.go:31] will retry after 3.846723062s: waiting for machine to come up
	I0421 18:43:41.307401   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.307927   22327 main.go:141] libmachine: (ha-113226-m03) Found IP for machine: 192.168.39.221
	I0421 18:43:41.307967   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has current primary IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
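
The "will retry after …" lines above are a poll loop with a growing, jittered delay: ask libvirt for the domain's DHCP lease, and if no IP has shown up yet, sleep and try again until a deadline. A generic sketch of that pattern (the probe closure is a stand-in, not minikube's code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries probe with a growing, jittered delay until it succeeds
// or the overall deadline passes -- the shape of the retry loop logged above.
func waitFor(probe func() error, deadline time.Duration) error {
	delay := 250 * time.Millisecond
	start := time.Now()
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base interval
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 3*time.Minute)
	fmt.Println("done:", err, "after", attempts, "attempts")
}
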
	I0421 18:43:41.307983   22327 main.go:141] libmachine: (ha-113226-m03) Reserving static IP address...
	I0421 18:43:41.308381   22327 main.go:141] libmachine: (ha-113226-m03) DBG | unable to find host DHCP lease matching {name: "ha-113226-m03", mac: "52:54:00:f7:32:68", ip: "192.168.39.221"} in network mk-ha-113226
	I0421 18:43:41.385983   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Getting to WaitForSSH function...
	I0421 18:43:41.386018   22327 main.go:141] libmachine: (ha-113226-m03) Reserved static IP address: 192.168.39.221
	I0421 18:43:41.386032   22327 main.go:141] libmachine: (ha-113226-m03) Waiting for SSH to be available...
	I0421 18:43:41.388666   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.389103   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.389134   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.389284   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Using SSH client type: external
	I0421 18:43:41.389311   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa (-rw-------)
	I0421 18:43:41.389345   22327 main.go:141] libmachine: (ha-113226-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 18:43:41.389359   22327 main.go:141] libmachine: (ha-113226-m03) DBG | About to run SSH command:
	I0421 18:43:41.389376   22327 main.go:141] libmachine: (ha-113226-m03) DBG | exit 0
	I0421 18:43:41.522248   22327 main.go:141] libmachine: (ha-113226-m03) DBG | SSH cmd err, output: <nil>: 
	I0421 18:43:41.522522   22327 main.go:141] libmachine: (ha-113226-m03) KVM machine creation complete!
	I0421 18:43:41.522825   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetConfigRaw
	I0421 18:43:41.523348   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:41.523558   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:41.523747   22327 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 18:43:41.523767   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:43:41.525063   22327 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 18:43:41.525075   22327 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 18:43:41.525080   22327 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 18:43:41.525086   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.527574   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.528023   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.528052   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.528226   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.528398   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.528570   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.528716   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.528890   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.529075   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.529086   22327 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 18:43:41.633923   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
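
"Waiting for SSH" boils down to dialing port 22 with the freshly generated key and running `exit 0` until it succeeds; strict host-key checking is off, as the external ssh invocation logged earlier shows. A stand-alone sketch of the same probe with golang.org/x/crypto/ssh, using the address and key path from this log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.221:22", cfg)
	if err != nil {
		panic(err) // not reachable yet -- a caller would retry
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// "exit 0" succeeding is the whole availability check.
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}
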
	I0421 18:43:41.633949   22327 main.go:141] libmachine: Detecting the provisioner...
	I0421 18:43:41.633959   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.636679   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.637099   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.637134   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.637304   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.637530   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.637720   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.637851   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.638001   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.638235   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.638254   22327 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 18:43:41.747401   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 18:43:41.747461   22327 main.go:141] libmachine: found compatible host: buildroot
	I0421 18:43:41.747467   22327 main.go:141] libmachine: Provisioning with buildroot...
	I0421 18:43:41.747474   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:41.747724   22327 buildroot.go:166] provisioning hostname "ha-113226-m03"
	I0421 18:43:41.747753   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:41.747907   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.750396   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.750782   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.750810   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.750955   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.751129   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.751296   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.751435   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.751598   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.751775   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.751792   22327 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226-m03 && echo "ha-113226-m03" | sudo tee /etc/hostname
	I0421 18:43:41.871320   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226-m03
	
	I0421 18:43:41.871347   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:41.874096   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.874440   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:41.874471   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:41.874679   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:41.874906   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.875113   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:41.875287   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:41.875492   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:41.875712   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:41.875741   22327 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:43:42.002388   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:43:42.002422   22327 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:43:42.002442   22327 buildroot.go:174] setting up certificates
	I0421 18:43:42.002453   22327 provision.go:84] configureAuth start
	I0421 18:43:42.002465   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetMachineName
	I0421 18:43:42.002691   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:42.005576   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.006028   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.006049   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.006256   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.008704   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.009114   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.009159   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.009338   22327 provision.go:143] copyHostCerts
	I0421 18:43:42.009373   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:43:42.009402   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:43:42.009409   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:43:42.009471   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:43:42.009542   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:43:42.009560   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:43:42.009567   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:43:42.009590   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:43:42.009630   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:43:42.009645   22327 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:43:42.009652   22327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:43:42.009671   22327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:43:42.009718   22327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226-m03 san=[127.0.0.1 192.168.39.221 ha-113226-m03 localhost minikube]
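
configureAuth issues a server certificate for the new machine, signed by the profile's CA and carrying the SANs listed above (127.0.0.1, 192.168.39.221, ha-113226-m03, localhost, minikube). A reduced crypto/x509 sketch producing a certificate with that SAN set; it self-signs for brevity, whereas the real step signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-113226-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"ha-113226-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.221")},
	}
	// Self-signed here for brevity; the CA-signed variant passes the parsed CA
	// certificate and its private key as parent and signer instead of tmpl and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
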
	I0421 18:43:42.180379   22327 provision.go:177] copyRemoteCerts
	I0421 18:43:42.180433   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:43:42.180453   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.183320   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.183629   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.183661   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.183869   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.184065   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.184239   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.184369   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:42.269864   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:43:42.269938   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:43:42.299481   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:43:42.299551   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:43:42.329886   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:43:42.329960   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:43:42.360793   22327 provision.go:87] duration metric: took 358.329156ms to configureAuth
	I0421 18:43:42.360820   22327 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:43:42.361005   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:43:42.361069   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.364065   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.364454   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.364501   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.364695   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.364905   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.365070   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.365220   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.365399   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:42.365559   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:42.365575   22327 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:43:42.652016   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
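
The %!s(MISSING) token in the command above is Go's fmt package marking a %s verb that received no argument when the command string was assembled; the tee output just above shows the CRIO_MINIKUBE_OPTIONS line still landed in /etc/sysconfig/crio.minikube. A two-line reproduction of the token:

package main

import "fmt"

func main() {
	// A %s verb with no corresponding argument renders as "%!s(MISSING)",
	// the same token embedded in the provisioning command above.
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
}
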
	I0421 18:43:42.652050   22327 main.go:141] libmachine: Checking connection to Docker...
	I0421 18:43:42.652060   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetURL
	I0421 18:43:42.653479   22327 main.go:141] libmachine: (ha-113226-m03) DBG | Using libvirt version 6000000
	I0421 18:43:42.655459   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.655853   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.655878   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.656062   22327 main.go:141] libmachine: Docker is up and running!
	I0421 18:43:42.656076   22327 main.go:141] libmachine: Reticulating splines...
	I0421 18:43:42.656083   22327 client.go:171] duration metric: took 24.015468696s to LocalClient.Create
	I0421 18:43:42.656109   22327 start.go:167] duration metric: took 24.015535075s to libmachine.API.Create "ha-113226"
	I0421 18:43:42.656118   22327 start.go:293] postStartSetup for "ha-113226-m03" (driver="kvm2")
	I0421 18:43:42.656127   22327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:43:42.656143   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.656382   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:43:42.656406   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.658613   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.658954   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.658979   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.659087   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.659251   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.659404   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.659533   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:42.741615   22327 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:43:42.746528   22327 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:43:42.746553   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:43:42.746630   22327 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:43:42.746714   22327 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:43:42.746724   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:43:42.746799   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:43:42.758627   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:43:42.788877   22327 start.go:296] duration metric: took 132.746102ms for postStartSetup
	I0421 18:43:42.788939   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetConfigRaw
	I0421 18:43:42.789498   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:42.792329   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.792825   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.792856   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.793127   22327 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:43:42.793376   22327 start.go:128] duration metric: took 24.171035236s to createHost
	I0421 18:43:42.793404   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.795760   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.796167   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.796195   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.796300   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.796487   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.796619   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.796820   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.796984   22327 main.go:141] libmachine: Using SSH client type: native
	I0421 18:43:42.797185   22327 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0421 18:43:42.797196   22327 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:43:42.903234   22327 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713725022.872356349
	
	I0421 18:43:42.903256   22327 fix.go:216] guest clock: 1713725022.872356349
	I0421 18:43:42.903266   22327 fix.go:229] Guest: 2024-04-21 18:43:42.872356349 +0000 UTC Remote: 2024-04-21 18:43:42.793390396 +0000 UTC m=+211.490544853 (delta=78.965953ms)
	I0421 18:43:42.903285   22327 fix.go:200] guest clock delta is within tolerance: 78.965953ms
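
The guest-clock check above runs "date" with "+%s.%N" on the new VM and compares the result with the host-side timestamp; syncing is skipped because the ~79ms delta is inside the tolerance. A minimal version of that comparison in Go, using the two timestamps from the log (the 2-second tolerance here is an assumption, not a value taken from the report):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1713725022, 872356349)                      // "date +%s.%N" on the VM
	remote := time.Date(2024, 4, 21, 18, 43, 42, 793390396, time.UTC) // host-side reference time
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
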
	I0421 18:43:42.903292   22327 start.go:83] releasing machines lock for "ha-113226-m03", held for 24.281100015s
	I0421 18:43:42.903311   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.903590   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:42.906430   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.906779   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.906811   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.908946   22327 out.go:177] * Found network options:
	I0421 18:43:42.910484   22327 out.go:177]   - NO_PROXY=192.168.39.60,192.168.39.233
	W0421 18:43:42.911946   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 18:43:42.911968   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:43:42.911980   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.912498   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.912713   22327 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:43:42.912814   22327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:43:42.912853   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	W0421 18:43:42.912871   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 18:43:42.912895   22327 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 18:43:42.912962   22327 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:43:42.912984   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:43:42.915561   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.915771   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.915967   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.916010   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.916136   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:42.916156   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.916168   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:42.916344   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.916351   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:43:42.916531   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.916535   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:43:42.916682   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:43:42.916691   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:42.916792   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:43:43.166848   22327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:43:43.173710   22327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:43:43.173774   22327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:43:43.190997   22327 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:43:43.191022   22327 start.go:494] detecting cgroup driver to use...
	I0421 18:43:43.191087   22327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:43:43.208131   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:43:43.223716   22327 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:43:43.223775   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:43:43.240229   22327 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:43:43.256732   22327 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:43:43.372542   22327 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:43:43.549547   22327 docker.go:233] disabling docker service ...
	I0421 18:43:43.549626   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:43:43.575795   22327 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:43:43.593248   22327 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:43:43.735032   22327 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:43:43.862216   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:43:43.878589   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:43:43.899876   22327 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:43:43.899938   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.912507   22327 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:43:43.912597   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.924983   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.937626   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.950230   22327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:43:43.963616   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.976179   22327 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:43:43.997570   22327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
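
The sed pipeline in the preceding lines rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The same line-oriented substitution expressed in Go, applied to a stand-in string rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `pause_image = "k8s.gcr.io/pause:3.6"
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
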
	I0421 18:43:44.009883   22327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:43:44.020868   22327 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 18:43:44.020948   22327 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 18:43:44.036454   22327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
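
The failed sysctl probe above is expected on a freshly booted VM: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, which is why the next steps run modprobe and then enable IPv4 forwarding. A rough Go sketch of that check-then-load sequence (it would need root to actually run):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	// The sysctl file is only present after br_netfilter is loaded.
	if _, err := os.Stat(sysctlPath); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
			return
		}
	}
	// Same effect as the logged: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
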
	I0421 18:43:44.047454   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:43:44.177987   22327 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:43:44.344056   22327 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:43:44.344137   22327 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:43:44.349818   22327 start.go:562] Will wait 60s for crictl version
	I0421 18:43:44.349874   22327 ssh_runner.go:195] Run: which crictl
	I0421 18:43:44.354558   22327 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:43:44.402975   22327 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:43:44.403064   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:43:44.434503   22327 ssh_runner.go:195] Run: crio --version
	I0421 18:43:44.473837   22327 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:43:44.475236   22327 out.go:177]   - env NO_PROXY=192.168.39.60
	I0421 18:43:44.476671   22327 out.go:177]   - env NO_PROXY=192.168.39.60,192.168.39.233
	I0421 18:43:44.477908   22327 main.go:141] libmachine: (ha-113226-m03) Calling .GetIP
	I0421 18:43:44.480300   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:44.480620   22327 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:43:44.480649   22327 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:43:44.480784   22327 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:43:44.485783   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
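
The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale line is filtered out before the current IP is appended and the file is copied back over /etc/hosts. A small Go sketch of the same idea; ensureHostsEntry and the scratch file name are illustrative helpers, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing "<tab>host" line and appends the fresh
// mapping, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing to a scratch copy here; the real flow edits /etc/hosts via sudo.
	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
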
	I0421 18:43:44.500820   22327 mustload.go:65] Loading cluster: ha-113226
	I0421 18:43:44.501042   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:43:44.501348   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:44.501387   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:44.516249   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0421 18:43:44.516709   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:44.517189   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:44.517210   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:44.517467   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:44.517624   22327 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:43:44.519069   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:43:44.519342   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:44.519378   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:44.534194   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0421 18:43:44.534640   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:44.535053   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:44.535075   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:44.535381   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:44.535569   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:43:44.535728   22327 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.221
	I0421 18:43:44.535740   22327 certs.go:194] generating shared ca certs ...
	I0421 18:43:44.535764   22327 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:43:44.535902   22327 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:43:44.535950   22327 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:43:44.535962   22327 certs.go:256] generating profile certs ...
	I0421 18:43:44.536083   22327 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:43:44.536110   22327 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593
	I0421 18:43:44.536130   22327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.233 192.168.39.221 192.168.39.254]
	I0421 18:43:44.643314   22327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593 ...
	I0421 18:43:44.643344   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593: {Name:mkb2f3103261430dd6185de67171ae27d3e41d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:43:44.643520   22327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593 ...
	I0421 18:43:44.643532   22327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593: {Name:mk42802e6d09fbf06761adc99c0883feaac0109f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:43:44.643605   22327 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.acc58593 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:43:44.643733   22327 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.acc58593 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
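
The apiserver serving certificate is regenerated here because its SAN list has to cover every endpoint a client may dial: the in-cluster service IPs, loopback, each control-plane node (192.168.39.60, .233, .221) and the kube-vip VIP 192.168.39.254. A self-contained Go illustration of building a certificate with that IP SAN set; it self-signs for brevity, whereas minikube signs with its cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// The IP SAN list logged above: service IPs, loopback, node IPs, and the VIP.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.60"), net.ParseIP("192.168.39.233"),
		net.ParseIP("192.168.39.221"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // lets clients verify the cert against any node IP or the VIP
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
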
	I0421 18:43:44.643856   22327 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:43:44.643871   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:43:44.643882   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:43:44.643893   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:43:44.643906   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:43:44.643918   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:43:44.643930   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:43:44.643942   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:43:44.643954   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:43:44.644002   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:43:44.644028   22327 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:43:44.644037   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:43:44.644062   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:43:44.644089   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:43:44.644110   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:43:44.644146   22327 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:43:44.644171   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:43:44.644185   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:44.644197   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:43:44.644228   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:43:44.647457   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:44.647873   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:43:44.647903   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:44.648057   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:43:44.648242   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:43:44.648401   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:43:44.648521   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:43:44.722414   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 18:43:44.728835   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 18:43:44.743616   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 18:43:44.748556   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 18:43:44.761600   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 18:43:44.769790   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 18:43:44.784551   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 18:43:44.789854   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 18:43:44.804017   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 18:43:44.809447   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 18:43:44.825449   22327 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 18:43:44.830431   22327 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0421 18:43:44.844752   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:43:44.873184   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:43:44.901115   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:43:44.928023   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:43:44.956946   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0421 18:43:44.983943   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:43:45.012358   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:43:45.042297   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:43:45.068975   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:43:45.098030   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:43:45.126506   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:43:45.155445   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 18:43:45.176909   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 18:43:45.197454   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 18:43:45.216822   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 18:43:45.237271   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 18:43:45.256858   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0421 18:43:45.276582   22327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 18:43:45.295927   22327 ssh_runner.go:195] Run: openssl version
	I0421 18:43:45.302391   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:43:45.316327   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:43:45.321837   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:43:45.321907   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:43:45.328402   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 18:43:45.342235   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:43:45.356656   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:43:45.362482   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:43:45.362547   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:43:45.369213   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:43:45.382986   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:43:45.396286   22327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:45.401757   22327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:45.401827   22327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:43:45.408560   22327 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:43:45.423297   22327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:43:45.428693   22327 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:43:45.428750   22327 kubeadm.go:928] updating node {m03 192.168.39.221 8443 v1.30.0 crio true true} ...
	I0421 18:43:45.428828   22327 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:43:45.428854   22327 kube-vip.go:111] generating kube-vip config ...
	I0421 18:43:45.428889   22327 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:43:45.450912   22327 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:43:45.450971   22327 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
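
The manifest above is the kube-vip static pod written to /etc/kubernetes/manifests on each control-plane node: whichever node holds the plndr-cp-lock lease announces 192.168.39.254 via ARP and load-balances the apiserver on port 8443, and that is the address the join step below targets. A trivial, illustrative reachability probe of that VIP (not something the test suite itself runs):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the kube-vip address and lb_port from the manifest above.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP is answering on", conn.RemoteAddr())
}
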
	I0421 18:43:45.451026   22327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:43:45.464213   22327 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 18:43:45.464285   22327 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 18:43:45.477932   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0421 18:43:45.477944   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0421 18:43:45.477965   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:43:45.477984   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:43:45.477933   22327 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 18:43:45.478022   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 18:43:45.478041   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:43:45.478139   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 18:43:45.499279   22327 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:43:45.499314   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 18:43:45.499345   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 18:43:45.499364   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 18:43:45.499387   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 18:43:45.499456   22327 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 18:43:45.513899   22327 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 18:43:45.513933   22327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
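
The kubeadm, kubectl and kubelet binaries are fetched from dl.k8s.io with a "?checksum=file:...sha256" query, i.e. each download is validated against the published SHA-256 digest before being copied into /var/lib/minikube/binaries. A sketch of that validation step; the file names are placeholders rather than minikube's cache paths:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	bin, err := os.ReadFile("kubelet") // the downloaded binary (placeholder path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	want, err := os.ReadFile("kubelet.sha256") // the published digest file (placeholder path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if strings.TrimSpace(string(want)) == got {
		fmt.Println("checksum OK:", got)
	} else {
		fmt.Println("checksum mismatch")
	}
}
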
	I0421 18:43:46.544283   22327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 18:43:46.554876   22327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 18:43:46.573639   22327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:43:46.592290   22327 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 18:43:46.612620   22327 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:43:46.617244   22327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:43:46.631372   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:43:46.768336   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:43:46.799710   22327 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:43:46.800152   22327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:43:46.800214   22327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:43:46.817359   22327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0421 18:43:46.817791   22327 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:43:46.818377   22327 main.go:141] libmachine: Using API Version  1
	I0421 18:43:46.818405   22327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:43:46.818774   22327 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:43:46.818996   22327 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:43:46.819154   22327 start.go:316] joinCluster: &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0421 18:43:46.819272   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 18:43:46.819293   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:43:46.822362   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:46.822903   22327 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:43:46.822929   22327 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:43:46.823133   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:43:46.823326   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:43:46.823457   22327 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:43:46.823638   22327 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:43:46.999726   22327 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:43:46.999762   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiuvl6.fmcttkgnokee07jj --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m03 --control-plane --apiserver-advertise-address=192.168.39.221 --apiserver-bind-port=8443"
	I0421 18:44:11.071476   22327 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aiuvl6.fmcttkgnokee07jj --discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-113226-m03 --control-plane --apiserver-advertise-address=192.168.39.221 --apiserver-bind-port=8443": (24.071686132s)
	I0421 18:44:11.071513   22327 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 18:44:11.767641   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-113226-m03 minikube.k8s.io/updated_at=2024_04_21T18_44_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-113226 minikube.k8s.io/primary=false
	I0421 18:44:11.929445   22327 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-113226-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 18:44:12.054158   22327 start.go:318] duration metric: took 25.234998723s to joinCluster
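
Joining the third control-plane node follows the standard kubeadm flow visible above: mint a join command (token plus discovery CA-cert hash) on an existing node, run "kubeadm join ... --control-plane" on the new one, then label it and remove the control-plane NoSchedule taint so it also acts as a worker (ControlPlane:true Worker:true). A compressed Go sketch of that sequence; the token and hash are placeholders and the commands are only printed, not executed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Steps mirrored from the log; real token/hash come from
	// "kubeadm token create --print-join-command" on an existing node.
	steps := [][]string{
		{"kubeadm", "join", "control-plane.minikube.internal:8443",
			"--token", "<token>", "--discovery-token-ca-cert-hash", "sha256:<hash>",
			"--control-plane", "--apiserver-advertise-address=192.168.39.221"},
		{"kubectl", "label", "--overwrite", "nodes", "ha-113226-m03", "minikube.k8s.io/primary=false"},
		{"kubectl", "taint", "nodes", "ha-113226-m03", "node-role.kubernetes.io/control-plane:NoSchedule-"},
	}
	for _, s := range steps {
		cmd := exec.Command(s[0], s[1:]...)
		fmt.Println("would run:", cmd.String()) // plan only; cmd.Run() intentionally omitted
	}
}
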
	I0421 18:44:12.054228   22327 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 18:44:12.056018   22327 out.go:177] * Verifying Kubernetes components...
	I0421 18:44:12.054640   22327 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:44:12.058119   22327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:44:12.361693   22327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:44:12.431649   22327 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:44:12.431974   22327 kapi.go:59] client config for ha-113226: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 18:44:12.432051   22327 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.60:8443
	I0421 18:44:12.432352   22327 node_ready.go:35] waiting up to 6m0s for node "ha-113226-m03" to be "Ready" ...
	I0421 18:44:12.432433   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:12.432443   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:12.432454   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:12.432462   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:12.436251   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:12.932518   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:12.932551   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:12.932561   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:12.932568   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:12.936225   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:13.432792   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:13.432813   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:13.432821   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:13.432825   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:13.436817   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:13.933171   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:13.933199   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:13.933215   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:13.933222   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:13.937491   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:14.433366   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:14.433400   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:14.433421   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:14.433428   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:14.437622   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:14.438984   22327 node_ready.go:53] node "ha-113226-m03" has status "Ready":"False"
	I0421 18:44:14.933343   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:14.933365   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:14.933373   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:14.933377   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:14.937421   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:15.433575   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:15.433603   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:15.433615   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:15.433621   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:15.437080   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:15.933567   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:15.933597   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:15.933609   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:15.933614   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:15.937640   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:16.433081   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:16.433104   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:16.433113   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:16.433118   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:16.436457   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:16.932582   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:16.932621   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:16.932628   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:16.932633   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:16.936420   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:16.937432   22327 node_ready.go:53] node "ha-113226-m03" has status "Ready":"False"
	I0421 18:44:17.432839   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:17.432859   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:17.432867   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:17.432871   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:17.437286   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:17.932566   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:17.932586   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:17.932594   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:17.932597   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:17.936095   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:18.433341   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:18.433367   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:18.433378   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:18.433382   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:18.437205   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:18.932909   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:18.932934   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:18.932943   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:18.932949   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:18.936607   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.432617   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:19.432636   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.432643   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.432647   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.436425   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.436981   22327 node_ready.go:53] node "ha-113226-m03" has status "Ready":"False"
	I0421 18:44:19.933302   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:19.933328   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.933339   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.933348   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.942552   22327 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 18:44:19.945571   22327 node_ready.go:49] node "ha-113226-m03" has status "Ready":"True"
	I0421 18:44:19.945597   22327 node_ready.go:38] duration metric: took 7.513225345s for node "ha-113226-m03" to be "Ready" ...
	I0421 18:44:19.945608   22327 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:44:19.945695   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:19.945709   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.945718   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.945723   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.952837   22327 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 18:44:19.959480   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.959547   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n8sbt
	I0421 18:44:19.959552   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.959560   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.959564   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.962025   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.962771   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:19.962790   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.962800   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.962804   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.965758   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.966583   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.966599   22327 pod_ready.go:81] duration metric: took 7.098468ms for pod "coredns-7db6d8ff4d-n8sbt" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.966609   22327 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.966655   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zhskp
	I0421 18:44:19.966662   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.966669   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.966677   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.970400   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.971188   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:19.971204   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.971214   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.971220   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.974194   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.974801   22327 pod_ready.go:92] pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.974818   22327 pod_ready.go:81] duration metric: took 8.203908ms for pod "coredns-7db6d8ff4d-zhskp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.974827   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.974877   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226
	I0421 18:44:19.974886   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.974892   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.974896   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.977515   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.978120   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:19.978133   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.978140   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.978144   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.981261   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:19.981982   22327 pod_ready.go:92] pod "etcd-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.982000   22327 pod_ready.go:81] duration metric: took 7.165713ms for pod "etcd-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.982013   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.982086   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m02
	I0421 18:44:19.982096   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.982107   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.982112   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.984810   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.985491   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:19.985505   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:19.985511   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:19.985515   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:19.988496   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:19.989029   22327 pod_ready.go:92] pod "etcd-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:19.989048   22327 pod_ready.go:81] duration metric: took 7.026733ms for pod "etcd-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:19.989059   22327 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:20.133355   22327 request.go:629] Waited for 144.227929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.133420   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.133426   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.133441   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.133454   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.137471   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:20.333548   22327 request.go:629] Waited for 195.282525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.333600   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.333605   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.333615   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.333620   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.337343   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:20.533819   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.533869   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.533882   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.533887   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.538505   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:20.733818   22327 request.go:629] Waited for 194.422606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.733920   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:20.733944   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.733954   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.733961   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.737480   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:20.990210   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:20.990229   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:20.990239   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:20.990245   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:20.993884   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.133399   22327 request.go:629] Waited for 138.234839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.133466   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.133471   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.133479   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.133484   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.137266   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.489966   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:21.489990   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.489999   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.490003   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.493587   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.533751   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.533795   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.533807   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.533812   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.537329   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.989861   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:21.989887   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.989899   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.989909   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.993796   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:21.994616   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:21.994633   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:21.994644   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:21.994649   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:21.997624   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:21.998318   22327 pod_ready.go:102] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 18:44:22.490076   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:22.490101   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.490109   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.490119   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.493771   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:22.494392   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:22.494408   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.494415   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.494419   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.497280   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:22.989879   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:22.989902   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.989913   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.989921   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.993750   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:22.994676   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:22.994693   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:22.994700   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:22.994704   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:22.997690   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:23.489283   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:23.489304   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.489313   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.489322   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.492851   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:23.493666   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:23.493681   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.493688   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.493691   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.496713   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:23.989930   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:23.989956   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.989966   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.989970   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.994038   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:23.994645   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:23.994661   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:23.994674   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:23.994678   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:23.997657   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:24.489587   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:24.489606   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.489614   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.489618   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.494001   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:24.495273   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:24.495287   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.495294   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.495298   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.498651   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:24.499235   22327 pod_ready.go:102] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 18:44:24.989479   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:24.989501   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.989509   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.989513   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.994556   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:44:24.995313   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:24.995329   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:24.995337   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:24.995342   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:24.998761   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:25.489739   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:25.489762   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.489773   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.489780   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.493243   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:25.494432   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:25.494446   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.494452   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.494466   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.497751   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:25.989969   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:25.989998   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.990010   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.990015   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.994140   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:25.994961   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:25.994981   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:25.994990   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:25.994995   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:25.997802   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:26.489608   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:26.489631   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.489640   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.489644   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.493502   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:26.494259   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:26.494275   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.494290   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.494297   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.497220   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:26.989519   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:26.989543   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.989554   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.989558   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.993454   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:26.994200   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:26.994215   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:26.994224   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:26.994232   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:26.997574   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:26.998181   22327 pod_ready.go:102] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 18:44:27.490038   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:27.490074   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.490087   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.490092   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.493968   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:27.494943   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:27.494970   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.494980   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.494987   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.497898   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:27.989844   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:27.989866   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.989873   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.989876   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.993937   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:27.994998   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:27.995016   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:27.995021   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:27.995025   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:27.997945   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:28.489895   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/etcd-ha-113226-m03
	I0421 18:44:28.489922   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.489933   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.489938   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.495192   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:44:28.496037   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:28.496053   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.496060   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.496064   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.500285   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.501157   22327 pod_ready.go:92] pod "etcd-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.501184   22327 pod_ready.go:81] duration metric: took 8.512116199s for pod "etcd-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.501207   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.501290   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226
	I0421 18:44:28.501299   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.501309   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.501315   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.504839   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:28.505473   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:28.505491   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.505499   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.505505   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.508642   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:28.509296   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.509319   22327 pod_ready.go:81] duration metric: took 8.098376ms for pod "kube-apiserver-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.509331   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.509404   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226-m02
	I0421 18:44:28.509415   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.509425   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.509431   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.513904   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.514841   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:28.514860   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.514876   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.514884   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.528780   22327 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 18:44:28.529530   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.529553   22327 pod_ready.go:81] duration metric: took 20.206887ms for pod "kube-apiserver-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.529567   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.529641   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-113226-m03
	I0421 18:44:28.529657   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.529667   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.529674   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.533780   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.534539   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:28.534552   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.534560   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.534565   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.537173   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:28.537848   22327 pod_ready.go:92] pod "kube-apiserver-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.537865   22327 pod_ready.go:81] duration metric: took 8.290833ms for pod "kube-apiserver-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.537874   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.537940   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226
	I0421 18:44:28.537949   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.537955   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.537961   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.546267   22327 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 18:44:28.733368   22327 request.go:629] Waited for 186.183281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:28.733428   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:28.733439   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.733446   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.733452   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.737990   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:28.738659   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:28.738688   22327 pod_ready.go:81] duration metric: took 200.804444ms for pod "kube-controller-manager-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.738703   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:28.933540   22327 request.go:629] Waited for 194.748447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m02
	I0421 18:44:28.933612   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m02
	I0421 18:44:28.933620   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:28.933627   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:28.933633   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:28.937608   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:29.134152   22327 request.go:629] Waited for 195.312619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:29.134231   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:29.134240   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.134258   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.134267   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.137069   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:29.137738   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:29.137757   22327 pod_ready.go:81] duration metric: took 399.0412ms for pod "kube-controller-manager-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.137766   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.333759   22327 request.go:629] Waited for 195.930659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m03
	I0421 18:44:29.333834   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-113226-m03
	I0421 18:44:29.333841   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.333852   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.333863   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.337201   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:29.533627   22327 request.go:629] Waited for 195.356241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:29.533693   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:29.533699   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.533709   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.533719   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.537122   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:29.537952   22327 pod_ready.go:92] pod "kube-controller-manager-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:29.537971   22327 pod_ready.go:81] duration metric: took 400.198289ms for pod "kube-controller-manager-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.537984   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.733539   22327 request.go:629] Waited for 195.499509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:44:29.733665   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h75dp
	I0421 18:44:29.733685   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.733693   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.733699   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.738187   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:29.933611   22327 request.go:629] Waited for 194.353876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:29.933659   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:29.933663   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:29.933671   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:29.933694   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:29.937764   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:29.938451   22327 pod_ready.go:92] pod "kube-proxy-h75dp" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:29.938467   22327 pod_ready.go:81] duration metric: took 400.477351ms for pod "kube-proxy-h75dp" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:29.938490   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.133672   22327 request.go:629] Waited for 195.106299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:44:30.133719   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nsv74
	I0421 18:44:30.133724   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.133732   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.133736   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.136644   22327 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:44:30.333694   22327 request.go:629] Waited for 196.406156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:30.333775   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:30.333781   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.333797   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.333825   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.337379   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:30.338159   22327 pod_ready.go:92] pod "kube-proxy-nsv74" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:30.338178   22327 pod_ready.go:81] duration metric: took 399.676627ms for pod "kube-proxy-nsv74" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.338188   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shlwr" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.533579   22327 request.go:629] Waited for 195.338039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shlwr
	I0421 18:44:30.533661   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shlwr
	I0421 18:44:30.533672   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.533683   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.533693   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.537213   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:30.733500   22327 request.go:629] Waited for 195.285993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:30.733590   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:30.733600   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.733608   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.733612   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.737270   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:30.737856   22327 pod_ready.go:92] pod "kube-proxy-shlwr" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:30.737875   22327 pod_ready.go:81] duration metric: took 399.679446ms for pod "kube-proxy-shlwr" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.737886   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:30.934093   22327 request.go:629] Waited for 196.112407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:44:30.934164   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226
	I0421 18:44:30.934183   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:30.934203   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:30.934211   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:30.937917   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.134234   22327 request.go:629] Waited for 195.370491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:31.134293   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226
	I0421 18:44:31.134298   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.134305   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.134308   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.139413   22327 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:44:31.140426   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:31.140449   22327 pod_ready.go:81] duration metric: took 402.55556ms for pod "kube-scheduler-ha-113226" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.140461   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.333766   22327 request.go:629] Waited for 193.242241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:44:31.333890   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m02
	I0421 18:44:31.333901   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.333912   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.333922   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.337658   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.533857   22327 request.go:629] Waited for 195.427901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:31.533914   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m02
	I0421 18:44:31.533921   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.533930   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.533935   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.537324   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.537864   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:31.537882   22327 pod_ready.go:81] duration metric: took 397.413345ms for pod "kube-scheduler-ha-113226-m02" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.537891   22327 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.734018   22327 request.go:629] Waited for 196.06804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m03
	I0421 18:44:31.734124   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-113226-m03
	I0421 18:44:31.734137   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.734148   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.734158   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.738523   22327 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:44:31.933786   22327 request.go:629] Waited for 194.567052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:31.933837   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes/ha-113226-m03
	I0421 18:44:31.933842   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.933849   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.933854   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.937150   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:31.938235   22327 pod_ready.go:92] pod "kube-scheduler-ha-113226-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 18:44:31.938258   22327 pod_ready.go:81] duration metric: took 400.359928ms for pod "kube-scheduler-ha-113226-m03" in "kube-system" namespace to be "Ready" ...
	I0421 18:44:31.938274   22327 pod_ready.go:38] duration metric: took 11.992653557s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:44:31.938304   22327 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:44:31.938378   22327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:44:31.955783   22327 api_server.go:72] duration metric: took 19.901521933s to wait for apiserver process to appear ...
	I0421 18:44:31.955808   22327 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:44:31.955845   22327 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0421 18:44:31.965302   22327 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0421 18:44:31.965388   22327 round_trippers.go:463] GET https://192.168.39.60:8443/version
	I0421 18:44:31.965400   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:31.965425   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:31.965436   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:31.966525   22327 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 18:44:31.966604   22327 api_server.go:141] control plane version: v1.30.0
	I0421 18:44:31.966622   22327 api_server.go:131] duration metric: took 10.807225ms to wait for apiserver health ...
	I0421 18:44:31.966632   22327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:44:32.134013   22327 request.go:629] Waited for 167.31001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.134122   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.134134   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.134141   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.134147   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.140412   22327 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:44:32.147662   22327 system_pods.go:59] 24 kube-system pods found
	I0421 18:44:32.147685   22327 system_pods.go:61] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:44:32.147694   22327 system_pods.go:61] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:44:32.147698   22327 system_pods.go:61] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:44:32.147701   22327 system_pods.go:61] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:44:32.147704   22327 system_pods.go:61] "etcd-ha-113226-m03" [1df4d990-651f-489d-851e-025124e70edb] Running
	I0421 18:44:32.147710   22327 system_pods.go:61] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:44:32.147713   22327 system_pods.go:61] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:44:32.147716   22327 system_pods.go:61] "kindnet-rhmbs" [fe360217-fab8-4a62-ba7a-5e50131dbe19] Running
	I0421 18:44:32.147719   22327 system_pods.go:61] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:44:32.147722   22327 system_pods.go:61] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:44:32.147725   22327 system_pods.go:61] "kube-apiserver-ha-113226-m03" [5150fa0a-f4d2-4b1f-98b7-c1df0368547f] Running
	I0421 18:44:32.147733   22327 system_pods.go:61] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:44:32.147739   22327 system_pods.go:61] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:44:32.147742   22327 system_pods.go:61] "kube-controller-manager-ha-113226-m03" [5e23b988-465d-4ab7-9b63-b6b12797144f] Running
	I0421 18:44:32.147745   22327 system_pods.go:61] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:44:32.147748   22327 system_pods.go:61] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:44:32.147750   22327 system_pods.go:61] "kube-proxy-shlwr" [67a1811b-054e-4f00-9360-2fbe114b4d62] Running
	I0421 18:44:32.147753   22327 system_pods.go:61] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:44:32.147756   22327 system_pods.go:61] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:44:32.147759   22327 system_pods.go:61] "kube-scheduler-ha-113226-m03" [7b3d0da2-eec6-48c5-bd3b-76032498004a] Running
	I0421 18:44:32.147762   22327 system_pods.go:61] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:44:32.147764   22327 system_pods.go:61] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:44:32.147767   22327 system_pods.go:61] "kube-vip-ha-113226-m03" [6a55b958-1d3d-49a8-9ea2-3857e4e537a7] Running
	I0421 18:44:32.147769   22327 system_pods.go:61] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:44:32.147776   22327 system_pods.go:74] duration metric: took 181.135389ms to wait for pod list to return data ...
	I0421 18:44:32.147785   22327 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:44:32.334171   22327 request.go:629] Waited for 186.322052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:44:32.334219   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/default/serviceaccounts
	I0421 18:44:32.334225   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.334232   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.334236   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.338123   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:32.338272   22327 default_sa.go:45] found service account: "default"
	I0421 18:44:32.338290   22327 default_sa.go:55] duration metric: took 190.49626ms for default service account to be created ...
	I0421 18:44:32.338303   22327 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:44:32.534015   22327 request.go:629] Waited for 195.652986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.534114   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/namespaces/kube-system/pods
	I0421 18:44:32.534125   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.534132   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.534139   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.545543   22327 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 18:44:32.553872   22327 system_pods.go:86] 24 kube-system pods found
	I0421 18:44:32.553904   22327 system_pods.go:89] "coredns-7db6d8ff4d-n8sbt" [a6d836c4-74bf-4509-8ca9-8d0dea360fa2] Running
	I0421 18:44:32.553910   22327 system_pods.go:89] "coredns-7db6d8ff4d-zhskp" [ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f] Running
	I0421 18:44:32.553914   22327 system_pods.go:89] "etcd-ha-113226" [77a3acc6-2318-409e-b8f2-29564ddf2c30] Running
	I0421 18:44:32.553918   22327 system_pods.go:89] "etcd-ha-113226-m02" [2ab55383-b09f-450c-8ace-02efa905a3c0] Running
	I0421 18:44:32.553922   22327 system_pods.go:89] "etcd-ha-113226-m03" [1df4d990-651f-489d-851e-025124e70edb] Running
	I0421 18:44:32.553926   22327 system_pods.go:89] "kindnet-4hx6j" [8afde60f-5c30-40ea-910a-580ec96b30d2] Running
	I0421 18:44:32.553931   22327 system_pods.go:89] "kindnet-d7vgl" [d7958e8c-754e-4550-bb8f-25cf241d9179] Running
	I0421 18:44:32.553935   22327 system_pods.go:89] "kindnet-rhmbs" [fe360217-fab8-4a62-ba7a-5e50131dbe19] Running
	I0421 18:44:32.553940   22327 system_pods.go:89] "kube-apiserver-ha-113226" [61e211cc-3821-404b-8e0c-3adc3051a6ed] Running
	I0421 18:44:32.553945   22327 system_pods.go:89] "kube-apiserver-ha-113226-m02" [7454f77e-7d98-48ef-a6c7-e30f916d60bd] Running
	I0421 18:44:32.553950   22327 system_pods.go:89] "kube-apiserver-ha-113226-m03" [5150fa0a-f4d2-4b1f-98b7-c1df0368547f] Running
	I0421 18:44:32.553955   22327 system_pods.go:89] "kube-controller-manager-ha-113226" [5ac531ac-4402-42ac-8057-e1a4eb6660d2] Running
	I0421 18:44:32.553960   22327 system_pods.go:89] "kube-controller-manager-ha-113226-m02" [78ec9302-f953-4f7f-80a3-a49041b250fb] Running
	I0421 18:44:32.553964   22327 system_pods.go:89] "kube-controller-manager-ha-113226-m03" [5e23b988-465d-4ab7-9b63-b6b12797144f] Running
	I0421 18:44:32.553970   22327 system_pods.go:89] "kube-proxy-h75dp" [c365aaf4-b083-4247-acd0-cc753abc9f98] Running
	I0421 18:44:32.553974   22327 system_pods.go:89] "kube-proxy-nsv74" [a6cbc678-5c8f-41ef-90f4-d5339eb45d20] Running
	I0421 18:44:32.553979   22327 system_pods.go:89] "kube-proxy-shlwr" [67a1811b-054e-4f00-9360-2fbe114b4d62] Running
	I0421 18:44:32.553985   22327 system_pods.go:89] "kube-scheduler-ha-113226" [d4db4761-f0dc-4c87-9a2a-a56a54d6c30b] Running
	I0421 18:44:32.553989   22327 system_pods.go:89] "kube-scheduler-ha-113226-m02" [b8ece44b-9129-469e-a5c5-58fb410a899b] Running
	I0421 18:44:32.553992   22327 system_pods.go:89] "kube-scheduler-ha-113226-m03" [7b3d0da2-eec6-48c5-bd3b-76032498004a] Running
	I0421 18:44:32.553999   22327 system_pods.go:89] "kube-vip-ha-113226" [a290fa40-f3a8-4995-87e6-00ae61ba51b5] Running
	I0421 18:44:32.554002   22327 system_pods.go:89] "kube-vip-ha-113226-m02" [6ae2199b-859b-4ae3-bb04-cf7aad69b74e] Running
	I0421 18:44:32.554008   22327 system_pods.go:89] "kube-vip-ha-113226-m03" [6a55b958-1d3d-49a8-9ea2-3857e4e537a7] Running
	I0421 18:44:32.554012   22327 system_pods.go:89] "storage-provisioner" [aa37bc69-20f7-416c-9cb7-56430aed3215] Running
	I0421 18:44:32.554021   22327 system_pods.go:126] duration metric: took 215.711595ms to wait for k8s-apps to be running ...
	I0421 18:44:32.554029   22327 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:44:32.554090   22327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:44:32.572228   22327 system_svc.go:56] duration metric: took 18.18626ms WaitForService to wait for kubelet
	I0421 18:44:32.572265   22327 kubeadm.go:576] duration metric: took 20.51800361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:44:32.572284   22327 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:44:32.733670   22327 request.go:629] Waited for 161.30523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.60:8443/api/v1/nodes
	I0421 18:44:32.733757   22327 round_trippers.go:463] GET https://192.168.39.60:8443/api/v1/nodes
	I0421 18:44:32.733770   22327 round_trippers.go:469] Request Headers:
	I0421 18:44:32.733781   22327 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:44:32.733789   22327 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0421 18:44:32.737464   22327 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:44:32.738783   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:44:32.738804   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:44:32.738816   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:44:32.738822   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:44:32.738828   22327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:44:32.738833   22327 node_conditions.go:123] node cpu capacity is 2
	I0421 18:44:32.738839   22327 node_conditions.go:105] duration metric: took 166.550488ms to run NodePressure ...
	I0421 18:44:32.738858   22327 start.go:240] waiting for startup goroutines ...
	I0421 18:44:32.738887   22327 start.go:254] writing updated cluster config ...
	I0421 18:44:32.739166   22327 ssh_runner.go:195] Run: rm -f paused
	I0421 18:44:32.788238   22327 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 18:44:32.790493   22327 out.go:177] * Done! kubectl is now configured to use "ha-113226" cluster and "default" namespace by default
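Note on the "Waited for ... due to client-side throttling, not priority and fairness" entries above (request.go:629): that wait is imposed by client-go's own client-side rate limiter, not by API-server priority-and-fairness. A minimal Go sketch of where that limiter is configured, assuming a standard kubeconfig and illustrative QPS/Burst values (these are not minikube's actual settings):

// Minimal sketch (not minikube code): client-go delays requests that exceed
// Burst so the long-run rate stays under QPS, which is what produces the
// "client-side throttling" waits logged above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl would.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Client-side rate limiting knobs; values below are illustrative assumptions.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same kind of call the log shows: list the kube-system pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}

With the default low QPS/Burst, a burst of GETs such as the serviceaccount, pod, and node listings above is expected to show sub-200ms client-side waits like the ones logged; this is normal and not an API-server error.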
	
	
	==> CRI-O <==
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.273853902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725350273829695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fa54adc-ed7d-45ad-a6b6-82d0f16d0443 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.274892263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9280d465-4404-4d6e-9dc6-aa8bf23643af name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.274977206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9280d465-4404-4d6e-9dc6-aa8bf23643af name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.275493492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9280d465-4404-4d6e-9dc6-aa8bf23643af name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.323903667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b908c8a-1188-41ab-b02d-c009bddb65a4 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.324020265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b908c8a-1188-41ab-b02d-c009bddb65a4 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.327118367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7965566c-b0b2-4f2b-bce7-7cf74f53c2fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.327629260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725350327582952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7965566c-b0b2-4f2b-bce7-7cf74f53c2fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.328254460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0d1b851-8a52-42ef-8ef3-d7f719ec3412 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.328351264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0d1b851-8a52-42ef-8ef3-d7f719ec3412 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.328646608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0d1b851-8a52-42ef-8ef3-d7f719ec3412 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.381721955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b15c42c4-1df5-4223-a0c5-79999835641c name=/runtime.v1.RuntimeService/Version
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.381829557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b15c42c4-1df5-4223-a0c5-79999835641c name=/runtime.v1.RuntimeService/Version
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.383358433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=978783b7-dceb-4591-8f45-63d4489565ef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.383791706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725350383763221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=978783b7-dceb-4591-8f45-63d4489565ef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.384574698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=754fcdbb-35b2-48bf-827a-8c3f7de24737 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.384672876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=754fcdbb-35b2-48bf-827a-8c3f7de24737 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.385000356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=754fcdbb-35b2-48bf-827a-8c3f7de24737 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.430550389Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c0df916-71a9-48a0-87cb-dfba79e4b84d name=/runtime.v1.RuntimeService/Version
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.430637004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c0df916-71a9-48a0-87cb-dfba79e4b84d name=/runtime.v1.RuntimeService/Version
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.432027047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7929a293-8cb7-4fbd-9cb9-da326d2704c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.432560956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725350432530388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7929a293-8cb7-4fbd-9cb9-da326d2704c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.433126211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baa30933-5001-4c95-8e09-cc020ee633bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.433254511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baa30933-5001-4c95-8e09-cc020ee633bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:49:10 ha-113226 crio[683]: time="2024-04-21 18:49:10.433494046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725076897279476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a81ee93000c49e5ad09bed8f20e53013c435083ada3a21ffce00040d86ab644,PodSandboxId:34fd27c2e48815fa276c996140d9e16025551998c8f045020d550be44d9517ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713724869517592026,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724869256684907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713724868859513690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74b
f-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e,PodSandboxId:3182dd9f53b28ae36966d1e541eba371205d4b4fc916d2c521b072a041b52432,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171372486
6720916319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713724866702400755,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f,PodSandboxId:2fdc0249766bb0d841dbe743ccc80132a425a55971d9ee71b48700b3191019c5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713724849861126713,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211eba53a0cc2aae45a8ede40475aeaf,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713724846631392571,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639,PodSandboxId:054e2ef640e4725b1b9ca12e9618846a1633242b6651474ab4c7260d022334f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713724846655558654,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713724846640034316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01,PodSandboxId:820c1a658f913f729ba93798b6be7dc67ad5ba9d05ec9382f304ea408aafcd7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713724846648374089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baa30933-5001-4c95-8e09-cc020ee633bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f640c1c70ad       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   faa43bf489bc5       busybox-fc5497c4f-vvhg8
	7a81ee93000c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   34fd27c2e4881       storage-provisioner
	3e93f6b05d337       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   65ac1d3e43166       coredns-7db6d8ff4d-zhskp
	0b5d0ab414db7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   2607d8484c47e       coredns-7db6d8ff4d-n8sbt
	52318879bf160       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      8 minutes ago       Running             kindnet-cni               0                   3182dd9f53b28       kindnet-d7vgl
	7048fade386a1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      8 minutes ago       Running             kube-proxy                0                   68e3a1db8a00b       kube-proxy-h75dp
	a95e4d8a09dd5       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     8 minutes ago       Running             kube-vip                  0                   2fdc0249766bb       kube-vip-ha-113226
	6ebd07febd8dc       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago       Running             kube-controller-manager   0                   054e2ef640e47       kube-controller-manager-ha-113226
	51aef14398913       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago       Running             kube-apiserver            0                   820c1a658f913       kube-apiserver-ha-113226
	9224faad5a972       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   6167071453e71       etcd-ha-113226
	e5498303bb3f9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago       Running             kube-scheduler            0                   adb821c8b93f8       kube-scheduler-ha-113226
	
	
	==> coredns [0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3] <==
	[INFO] 10.244.2.2:36518 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001806972s
	[INFO] 10.244.0.4:38372 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001940066s
	[INFO] 10.244.1.2:42730 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000328172s
	[INFO] 10.244.1.2:47312 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173713s
	[INFO] 10.244.2.2:36986 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108899s
	[INFO] 10.244.2.2:36822 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003347s
	[INFO] 10.244.2.2:41452 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195319s
	[INFO] 10.244.2.2:60508 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074721s
	[INFO] 10.244.0.4:51454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104908s
	[INFO] 10.244.0.4:57376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109603s
	[INFO] 10.244.0.4:40827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078308s
	[INFO] 10.244.0.4:47256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153662s
	[INFO] 10.244.0.4:37424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014403s
	[INFO] 10.244.0.4:57234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144257s
	[INFO] 10.244.1.2:51901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259177s
	[INFO] 10.244.1.2:44450 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123202s
	[INFO] 10.244.2.2:53556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169239s
	[INFO] 10.244.2.2:42828 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117966s
	[INFO] 10.244.2.2:51827 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137514s
	[INFO] 10.244.0.4:56918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047175s
	[INFO] 10.244.1.2:45608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118838s
	[INFO] 10.244.1.2:50713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284967s
	[INFO] 10.244.2.2:58426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000313356s
	[INFO] 10.244.2.2:39340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130525s
	[INFO] 10.244.0.4:58687 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094588s
	
	
	==> coredns [3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f] <==
	[INFO] 10.244.1.2:44904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149133s
	[INFO] 10.244.1.2:43332 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.07284561s
	[INFO] 10.244.1.2:42838 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013593215s
	[INFO] 10.244.1.2:60318 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215954s
	[INFO] 10.244.1.2:46296 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158635s
	[INFO] 10.244.1.2:41498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146514s
	[INFO] 10.244.2.2:54180 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001245595s
	[INFO] 10.244.2.2:56702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118055s
	[INFO] 10.244.2.2:52049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103643s
	[INFO] 10.244.2.2:39892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013318s
	[INFO] 10.244.0.4:50393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617766s
	[INFO] 10.244.0.4:58125 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001163449s
	[INFO] 10.244.1.2:55583 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000370228s
	[INFO] 10.244.1.2:57237 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092539s
	[INFO] 10.244.2.2:42488 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129888s
	[INFO] 10.244.0.4:48460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104891s
	[INFO] 10.244.0.4:35562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112767s
	[INFO] 10.244.0.4:37396 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009448s
	[INFO] 10.244.1.2:40110 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268116s
	[INFO] 10.244.1.2:40165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166492s
	[INFO] 10.244.2.2:45365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014902s
	[INFO] 10.244.2.2:48282 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093124s
	[INFO] 10.244.0.4:43339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000357932s
	[INFO] 10.244.0.4:39537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086381s
	[INFO] 10.244.0.4:33649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093318s
	
	
	==> describe nodes <==
	Name:               ha-113226
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_40_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:40:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:49:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:45:00 +0000   Sun, 21 Apr 2024 18:41:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-113226
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f770328f068141e091b6c3dbf4a76488
	  System UUID:                f770328f-0681-41e0-91b6-c3dbf4a76488
	  Boot ID:                    bbf1e5be-35e8-4986-b694-bc173cac60e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vvhg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-7db6d8ff4d-n8sbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m4s
	  kube-system                 coredns-7db6d8ff4d-zhskp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m4s
	  kube-system                 etcd-ha-113226                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-d7vgl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m5s
	  kube-system                 kube-apiserver-ha-113226             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-ha-113226    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-h75dp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-scheduler-ha-113226             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-vip-ha-113226                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m3s   kube-proxy       
	  Normal  Starting                 8m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m15s  kubelet          Node ha-113226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s  kubelet          Node ha-113226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s  kubelet          Node ha-113226 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m5s   node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal  NodeReady                8m2s   kubelet          Node ha-113226 status is now: NodeReady
	  Normal  RegisteredNode           5m56s  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal  RegisteredNode           4m44s  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	
	
	Name:               ha-113226-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_42_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:42:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:45:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Apr 2024 18:44:58 +0000   Sun, 21 Apr 2024 18:46:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    ha-113226-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e96ca06000049ab994a1d4c31482f88
	  System UUID:                8e96ca06-0000-49ab-994a-1d4c31482f88
	  Boot ID:                    2000e4cc-71bf-4b10-8615-26011164ba86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-djlm5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-113226-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-4hx6j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-113226-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-113226-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-nsv74                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-113226-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-113226-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m15s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m15s)  kubelet          Node ha-113226-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m15s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  NodeNotReady             2m50s                  node-controller  Node ha-113226-m02 status is now: NodeNotReady
	
	
	Name:               ha-113226-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_44_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:44:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:49:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:44:39 +0000   Sun, 21 Apr 2024 18:44:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-113226-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e527acdd3b3544d5b53bced4a1abdb9a
	  System UUID:                e527acdd-3b35-44d5-b53b-ced4a1abdb9a
	  Boot ID:                    e881e85f-0867-4709-bc9b-ff693580d870
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lccdt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-113226-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m
	  kube-system                 kindnet-rhmbs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-113226-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-113226-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-shlwr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-113226-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-113226-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node ha-113226-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal  RegisteredNode           5m                   node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	
	
	Name:               ha-113226-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_45_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:45:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:49:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:45:44 +0000   Sun, 21 Apr 2024 18:45:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-113226-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d55ce55d9e44738a42ed29cc9f1198
	  System UUID:                c1d55ce5-5d9e-4473-8a42-ed29cc9f1198
	  Boot ID:                    c7e0935e-75ea-414d-a3d7-b181d3048bca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jkd2l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-6s6v7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x2 over 3m58s)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x2 over 3m58s)  kubelet          Node ha-113226-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x2 over 3m58s)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node ha-113226-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr21 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053196] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042774] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.623428] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.542269] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.723086] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.114701] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.062085] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054859] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.200473] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.119933] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.314231] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.898172] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.066212] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.334925] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +1.112693] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.070346] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.082112] kauditd_printk_skb: 40 callbacks suppressed
	[Apr21 18:41] kauditd_printk_skb: 21 callbacks suppressed
	[Apr21 18:43] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c] <==
	{"level":"warn","ts":"2024-04-21T18:49:10.559589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.659417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.746925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.757048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.758702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.762122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.781714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.790048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.798125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.803043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.806926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.815822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.824733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.833693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.840519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.845447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.857565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.860277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.863895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.86986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.873471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.876938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.88272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.892403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:49:10.900041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:49:10 up 8 min,  0 users,  load average: 0.50, 0.58, 0.31
	Linux ha-113226 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52318879bf16002641f806f8e50dcec905a323807458991907dbe9982e093e8e] <==
	I0421 18:48:38.661412       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:48:48.669566       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:48:48.669616       1 main.go:227] handling current node
	I0421 18:48:48.669628       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:48:48.669634       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:48:48.669745       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:48:48.669780       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:48:48.669832       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:48:48.669866       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:48:58.691963       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:48:58.692054       1 main.go:227] handling current node
	I0421 18:48:58.692076       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:48:58.692085       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:48:58.692381       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:48:58.692425       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:48:58.692513       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:48:58.692554       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:49:08.796666       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:49:08.796864       1 main.go:227] handling current node
	I0421 18:49:08.796920       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:49:08.796947       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:49:08.797269       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:49:08.797331       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:49:08.797453       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:49:08.797491       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01] <==
	Trace[1105283356]: ["GuaranteedUpdate etcd3" audit-id:4c61baa6-37ba-4c82-8451-55676d7fcd54,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 639ms (18:45:12.853)
	Trace[1105283356]:  ---"Txn call completed" 638ms (18:45:13.492)]
	Trace[1105283356]: [639.485202ms] [639.485202ms] END
	I0421 18:45:13.495373       1 trace.go:236] Trace[462650061]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:bfc4de9f-cf2f-4a10-94a4-f663fdd11177,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:limitranges,scope:namespace,url:/api/v1/namespaces/kube-system/limitranges,user-agent:kube-apiserver/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:LIST (21-Apr-2024 18:45:12.847) (total time: 647ms):
	Trace[462650061]: ["List(recursive=true) etcd3" audit-id:bfc4de9f-cf2f-4a10-94a4-f663fdd11177,key:/limitranges/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 647ms (18:45:12.847)]
	Trace[462650061]: [647.968492ms] [647.968492ms] END
	I0421 18:45:13.508688       1 trace.go:236] Trace[558803442]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:345e531a-cef1-4b11-be90-d2c06c0142b4,client:192.168.39.60,api-group:,api-version:v1,name:ha-113226-m04,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-113226-m04,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:node-controller,verb:PATCH (21-Apr-2024 18:45:12.839) (total time: 669ms):
	Trace[558803442]: ["GuaranteedUpdate etcd3" audit-id:345e531a-cef1-4b11-be90-d2c06c0142b4,key:/minions/ha-113226-m04,type:*core.Node,resource:nodes 666ms (18:45:12.841)
	Trace[558803442]:  ---"Txn call completed" 645ms (18:45:13.488)]
	Trace[558803442]: ---"About to apply patch" 645ms (18:45:13.488)
	Trace[558803442]: ---"Object stored in database" 17ms (18:45:13.508)
	Trace[558803442]: [669.383879ms] [669.383879ms] END
	I0421 18:45:13.523111       1 trace.go:236] Trace[4398074]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:594f09d9-5b61-46a4-bb86-e814284267d3,client:192.168.39.60,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (21-Apr-2024 18:45:12.846) (total time: 676ms):
	Trace[4398074]: [676.90749ms] [676.90749ms] END
	I0421 18:45:13.526820       1 trace.go:236] Trace[1728014601]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:5402b02f-96dd-4b74-8ace-52598b8b3784,client:192.168.39.60,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (21-Apr-2024 18:45:12.845) (total time: 681ms):
	Trace[1728014601]: [681.264916ms] [681.264916ms] END
	I0421 18:45:13.556389       1 trace.go:236] Trace[1542993190]: "Patch" accept:application/json, */*,audit-id:a340bfba-ec53-4267-9175-bedbdec833fe,client:192.168.39.20,api-group:,api-version:v1,name:ha-113226-m04,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-113226-m04,user-agent:kubeadm/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (21-Apr-2024 18:45:12.843) (total time: 713ms):
	Trace[1542993190]: ["GuaranteedUpdate etcd3" audit-id:a340bfba-ec53-4267-9175-bedbdec833fe,key:/minions/ha-113226-m04,type:*core.Node,resource:nodes 713ms (18:45:12.843)
	Trace[1542993190]:  ---"Txn call completed" 650ms (18:45:13.495)
	Trace[1542993190]:  ---"Txn call completed" 33ms (18:45:13.555)]
	Trace[1542993190]: ---"About to apply patch" 651ms (18:45:13.495)
	Trace[1542993190]: ---"About to apply patch" 21ms (18:45:13.519)
	Trace[1542993190]: ---"Object stored in database" 34ms (18:45:13.556)
	Trace[1542993190]: [713.190152ms] [713.190152ms] END
	W0421 18:46:01.893487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.221 192.168.39.60]
	
	
	==> kube-controller-manager [6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639] <==
	I0421 18:44:34.208158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="249.253854ms"
	E0421 18:44:34.208434       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0421 18:44:34.233985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.385354ms"
	I0421 18:44:34.234126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.792µs"
	I0421 18:44:34.382775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.057398ms"
	I0421 18:44:34.382917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.789µs"
	I0421 18:44:35.900012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.407µs"
	I0421 18:44:35.973422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.604µs"
	I0421 18:44:37.417544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.16384ms"
	I0421 18:44:37.417720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.886µs"
	I0421 18:44:37.452430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.386746ms"
	I0421 18:44:37.452751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.854µs"
	I0421 18:44:37.495479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.92056ms"
	I0421 18:44:37.495642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.44µs"
	I0421 18:44:38.015065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.659092ms"
	I0421 18:44:38.015306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.071µs"
	E0421 18:45:12.381104       1 certificate_controller.go:146] Sync csr-chwql failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-chwql": the object has been modified; please apply your changes to the latest version and try again
	E0421 18:45:12.396861       1 certificate_controller.go:146] Sync csr-chwql failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-chwql": the object has been modified; please apply your changes to the latest version and try again
	I0421 18:45:12.833534       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-113226-m04\" does not exist"
	I0421 18:45:13.510552       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-113226-m04" podCIDRs=["10.244.3.0/24"]
	I0421 18:45:15.313582       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-113226-m04"
	I0421 18:45:23.057743       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-113226-m04"
	I0421 18:46:20.339550       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-113226-m04"
	I0421 18:46:20.525993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.463367ms"
	I0421 18:46:20.526510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.029µs"
	
	
	==> kube-proxy [7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3] <==
	I0421 18:41:07.148284       1 server_linux.go:69] "Using iptables proxy"
	I0421 18:41:07.174666       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.60"]
	I0421 18:41:07.287305       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:41:07.287394       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:41:07.287423       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:41:07.290726       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:41:07.291117       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:41:07.291156       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:41:07.292264       1 config.go:192] "Starting service config controller"
	I0421 18:41:07.292303       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:41:07.292327       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:41:07.292330       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:41:07.295118       1 config.go:319] "Starting node config controller"
	I0421 18:41:07.295236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 18:41:07.392692       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 18:41:07.392777       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:41:07.396378       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab] <==
	E0421 18:44:08.322751       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmbs\": pod kindnet-rhmbs is already assigned to node \"ha-113226-m03\"" pod="kube-system/kindnet-rhmbs"
	I0421 18:44:08.322807       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmbs" node="ha-113226-m03"
	E0421 18:44:08.409086       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5mwwd\": pod kube-proxy-5mwwd is already assigned to node \"ha-113226-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5mwwd" node="ha-113226-m03"
	E0421 18:44:08.409238       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 00a647d3-d960-4114-866a-cdf4a6902acd(kube-system/kube-proxy-5mwwd) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5mwwd"
	E0421 18:44:08.409346       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5mwwd\": pod kube-proxy-5mwwd is already assigned to node \"ha-113226-m03\"" pod="kube-system/kube-proxy-5mwwd"
	I0421 18:44:08.410620       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5mwwd" node="ha-113226-m03"
	E0421 18:44:08.419831       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-spcr9\": pod kindnet-spcr9 is already assigned to node \"ha-113226-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-spcr9" node="ha-113226-m03"
	E0421 18:44:08.419899       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 42f59e78-d5eb-4b88-8160-b6a5248be0f5(kube-system/kindnet-spcr9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-spcr9"
	E0421 18:44:08.419915       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-spcr9\": pod kindnet-spcr9 is already assigned to node \"ha-113226-m03\"" pod="kube-system/kindnet-spcr9"
	I0421 18:44:08.419929       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-spcr9" node="ha-113226-m03"
	E0421 18:44:33.679054       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-djlm5\": pod busybox-fc5497c4f-djlm5 is already assigned to node \"ha-113226-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-djlm5" node="ha-113226-m02"
	E0421 18:44:33.679148       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4dbff1e7-4533-4189-8b00-098307a11d0b(default/busybox-fc5497c4f-djlm5) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-djlm5"
	E0421 18:44:33.679256       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-djlm5\": pod busybox-fc5497c4f-djlm5 is already assigned to node \"ha-113226-m02\"" pod="default/busybox-fc5497c4f-djlm5"
	I0421 18:44:33.679280       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-djlm5" node="ha-113226-m02"
	E0421 18:45:13.577372       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6s6v7\": pod kube-proxy-6s6v7 is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6s6v7" node="ha-113226-m04"
	E0421 18:45:13.577702       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5e72592e-0d66-4c92-982d-53f1d5a19c87(kube-system/kube-proxy-6s6v7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6s6v7"
	E0421 18:45:13.579518       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6s6v7\": pod kube-proxy-6s6v7 is already assigned to node \"ha-113226-m04\"" pod="kube-system/kube-proxy-6s6v7"
	I0421 18:45:13.579622       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6s6v7" node="ha-113226-m04"
	E0421 18:45:13.635627       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mvlpk\": pod kindnet-mvlpk is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mvlpk" node="ha-113226-m04"
	E0421 18:45:13.635736       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mvlpk\": pod kindnet-mvlpk is already assigned to node \"ha-113226-m04\"" pod="kube-system/kindnet-mvlpk"
	I0421 18:45:13.635762       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mvlpk" node="ha-113226-m04"
	E0421 18:45:13.791313       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jtqnc\": pod kindnet-jtqnc is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jtqnc" node="ha-113226-m04"
	E0421 18:45:13.791389       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7099f771-deb3-4c65-bd3f-d8a91874d516(kube-system/kindnet-jtqnc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jtqnc"
	E0421 18:45:13.791405       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jtqnc\": pod kindnet-jtqnc is already assigned to node \"ha-113226-m04\"" pod="kube-system/kindnet-jtqnc"
	I0421 18:45:13.791423       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jtqnc" node="ha-113226-m04"
	
	
	==> kubelet <==
	Apr 21 18:44:55 ha-113226 kubelet[1377]: E0421 18:44:55.931737    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:44:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:44:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:44:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:44:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:45:55 ha-113226 kubelet[1377]: E0421 18:45:55.930012    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:45:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:45:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:45:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:45:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:46:55 ha-113226 kubelet[1377]: E0421 18:46:55.928551    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:46:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:46:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:46:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:46:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:47:55 ha-113226 kubelet[1377]: E0421 18:47:55.926967    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:47:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:47:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:47:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:47:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:48:55 ha-113226 kubelet[1377]: E0421 18:48:55.928469    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:48:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:48:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:48:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:48:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-113226 -n ha-113226
helpers_test.go:261: (dbg) Run:  kubectl --context ha-113226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (62.24s)
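For context (not part of the captured logs above): the recurring kubelet "Could not set up iptables canary" errors in the post-mortem output indicate the guest kernel has no ip6tables nat table, i.e. the ip6table_nat module is not loaded; in these runs it appears to be informational noise rather than the cause of the failure. A possible way to confirm this inside the node, assuming the ha-113226 profile is still running, is:

    out/minikube-linux-amd64 -p ha-113226 ssh "lsmod | grep ip6table_nat"
    out/minikube-linux-amd64 -p ha-113226 ssh "sudo ip6tables -t nat -L"

Both are standard commands; the second should reproduce the "Table does not exist" message if the module is indeed missing.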

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-113226 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-113226 -v=7 --alsologtostderr
E0421 18:49:33.889394   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:51:09.207997   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-113226 -v=7 --alsologtostderr: exit status 82 (2m2.721542657s)

                                                
                                                
-- stdout --
	* Stopping node "ha-113226-m04"  ...
	* Stopping node "ha-113226-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:49:12.490115   28334 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:49:12.490252   28334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:49:12.490264   28334 out.go:304] Setting ErrFile to fd 2...
	I0421 18:49:12.490271   28334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:49:12.490467   28334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:49:12.490690   28334 out.go:298] Setting JSON to false
	I0421 18:49:12.490768   28334 mustload.go:65] Loading cluster: ha-113226
	I0421 18:49:12.491168   28334 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:49:12.491272   28334 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:49:12.491447   28334 mustload.go:65] Loading cluster: ha-113226
	I0421 18:49:12.491577   28334 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:49:12.491604   28334 stop.go:39] StopHost: ha-113226-m04
	I0421 18:49:12.491955   28334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:12.491998   28334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:12.508021   28334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0421 18:49:12.508463   28334 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:12.509019   28334 main.go:141] libmachine: Using API Version  1
	I0421 18:49:12.509038   28334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:12.509369   28334 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:12.512603   28334 out.go:177] * Stopping node "ha-113226-m04"  ...
	I0421 18:49:12.513772   28334 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 18:49:12.513793   28334 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:49:12.513976   28334 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 18:49:12.514008   28334 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:49:12.516667   28334 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:49:12.517081   28334 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:44:58 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:49:12.517117   28334 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:49:12.517230   28334 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:49:12.517366   28334 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:49:12.517503   28334 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:49:12.517607   28334 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:49:12.605288   28334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 18:49:12.663949   28334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 18:49:12.718913   28334 main.go:141] libmachine: Stopping "ha-113226-m04"...
	I0421 18:49:12.718938   28334 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:49:12.720538   28334 main.go:141] libmachine: (ha-113226-m04) Calling .Stop
	I0421 18:49:12.724384   28334 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 0/120
	I0421 18:49:13.725878   28334 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 1/120
	I0421 18:49:14.727735   28334 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:49:14.728938   28334 main.go:141] libmachine: Machine "ha-113226-m04" was stopped.
	I0421 18:49:14.728954   28334 stop.go:75] duration metric: took 2.215182053s to stop
	I0421 18:49:14.728982   28334 stop.go:39] StopHost: ha-113226-m03
	I0421 18:49:14.729284   28334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:49:14.729333   28334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:49:14.744074   28334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0421 18:49:14.744525   28334 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:49:14.745002   28334 main.go:141] libmachine: Using API Version  1
	I0421 18:49:14.745027   28334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:49:14.745340   28334 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:49:14.747061   28334 out.go:177] * Stopping node "ha-113226-m03"  ...
	I0421 18:49:14.748181   28334 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 18:49:14.748201   28334 main.go:141] libmachine: (ha-113226-m03) Calling .DriverName
	I0421 18:49:14.748424   28334 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 18:49:14.748458   28334 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHHostname
	I0421 18:49:14.751117   28334 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:49:14.751529   28334 main.go:141] libmachine: (ha-113226-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:32:68", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:43:34 +0000 UTC Type:0 Mac:52:54:00:f7:32:68 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-113226-m03 Clientid:01:52:54:00:f7:32:68}
	I0421 18:49:14.751553   28334 main.go:141] libmachine: (ha-113226-m03) DBG | domain ha-113226-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:f7:32:68 in network mk-ha-113226
	I0421 18:49:14.751676   28334 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHPort
	I0421 18:49:14.751850   28334 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHKeyPath
	I0421 18:49:14.752014   28334 main.go:141] libmachine: (ha-113226-m03) Calling .GetSSHUsername
	I0421 18:49:14.752141   28334 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m03/id_rsa Username:docker}
	I0421 18:49:14.843670   28334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 18:49:14.899878   28334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 18:49:14.956145   28334 main.go:141] libmachine: Stopping "ha-113226-m03"...
	I0421 18:49:14.956173   28334 main.go:141] libmachine: (ha-113226-m03) Calling .GetState
	I0421 18:49:14.957671   28334 main.go:141] libmachine: (ha-113226-m03) Calling .Stop
	I0421 18:49:14.961673   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 0/120
	I0421 18:49:15.963184   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 1/120
	I0421 18:49:16.964724   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 2/120
	I0421 18:49:17.966035   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 3/120
	I0421 18:49:18.967581   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 4/120
	I0421 18:49:19.969897   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 5/120
	I0421 18:49:20.971481   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 6/120
	I0421 18:49:21.973214   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 7/120
	I0421 18:49:22.975745   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 8/120
	I0421 18:49:23.977189   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 9/120
	I0421 18:49:24.979257   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 10/120
	I0421 18:49:25.981140   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 11/120
	I0421 18:49:26.982648   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 12/120
	I0421 18:49:27.984866   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 13/120
	I0421 18:49:28.986327   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 14/120
	I0421 18:49:29.987917   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 15/120
	I0421 18:49:30.989437   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 16/120
	I0421 18:49:31.990938   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 17/120
	I0421 18:49:32.993238   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 18/120
	I0421 18:49:33.995224   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 19/120
	I0421 18:49:34.996966   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 20/120
	I0421 18:49:35.998581   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 21/120
	I0421 18:49:36.999928   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 22/120
	I0421 18:49:38.001517   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 23/120
	I0421 18:49:39.003620   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 24/120
	I0421 18:49:40.005398   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 25/120
	I0421 18:49:41.007046   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 26/120
	I0421 18:49:42.008459   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 27/120
	I0421 18:49:43.010557   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 28/120
	I0421 18:49:44.011946   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 29/120
	I0421 18:49:45.013882   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 30/120
	I0421 18:49:46.015367   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 31/120
	I0421 18:49:47.016824   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 32/120
	I0421 18:49:48.018250   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 33/120
	I0421 18:49:49.019961   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 34/120
	I0421 18:49:50.021614   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 35/120
	I0421 18:49:51.022806   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 36/120
	I0421 18:49:52.024019   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 37/120
	I0421 18:49:53.025360   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 38/120
	I0421 18:49:54.026533   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 39/120
	I0421 18:49:55.028273   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 40/120
	I0421 18:49:56.029660   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 41/120
	I0421 18:49:57.030941   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 42/120
	I0421 18:49:58.032235   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 43/120
	I0421 18:49:59.033577   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 44/120
	I0421 18:50:00.035231   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 45/120
	I0421 18:50:01.037213   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 46/120
	I0421 18:50:02.038691   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 47/120
	I0421 18:50:03.040401   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 48/120
	I0421 18:50:04.041626   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 49/120
	I0421 18:50:05.043374   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 50/120
	I0421 18:50:06.044801   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 51/120
	I0421 18:50:07.046120   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 52/120
	I0421 18:50:08.047410   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 53/120
	I0421 18:50:09.049173   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 54/120
	I0421 18:50:10.050925   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 55/120
	I0421 18:50:11.052226   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 56/120
	I0421 18:50:12.053545   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 57/120
	I0421 18:50:13.055093   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 58/120
	I0421 18:50:14.056422   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 59/120
	I0421 18:50:15.058304   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 60/120
	I0421 18:50:16.059829   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 61/120
	I0421 18:50:17.061046   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 62/120
	I0421 18:50:18.062281   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 63/120
	I0421 18:50:19.063534   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 64/120
	I0421 18:50:20.065707   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 65/120
	I0421 18:50:21.067011   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 66/120
	I0421 18:50:22.069100   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 67/120
	I0421 18:50:23.070437   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 68/120
	I0421 18:50:24.071732   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 69/120
	I0421 18:50:25.073663   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 70/120
	I0421 18:50:26.075120   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 71/120
	I0421 18:50:27.076659   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 72/120
	I0421 18:50:28.078048   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 73/120
	I0421 18:50:29.079601   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 74/120
	I0421 18:50:30.081407   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 75/120
	I0421 18:50:31.082752   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 76/120
	I0421 18:50:32.084148   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 77/120
	I0421 18:50:33.085554   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 78/120
	I0421 18:50:34.086935   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 79/120
	I0421 18:50:35.089115   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 80/120
	I0421 18:50:36.090335   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 81/120
	I0421 18:50:37.091634   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 82/120
	I0421 18:50:38.092870   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 83/120
	I0421 18:50:39.094272   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 84/120
	I0421 18:50:40.095527   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 85/120
	I0421 18:50:41.097302   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 86/120
	I0421 18:50:42.098815   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 87/120
	I0421 18:50:43.100312   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 88/120
	I0421 18:50:44.101650   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 89/120
	I0421 18:50:45.103481   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 90/120
	I0421 18:50:46.105190   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 91/120
	I0421 18:50:47.106459   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 92/120
	I0421 18:50:48.107605   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 93/120
	I0421 18:50:49.108838   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 94/120
	I0421 18:50:50.110464   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 95/120
	I0421 18:50:51.111910   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 96/120
	I0421 18:50:52.113039   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 97/120
	I0421 18:50:53.114518   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 98/120
	I0421 18:50:54.115759   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 99/120
	I0421 18:50:55.117284   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 100/120
	I0421 18:50:56.118627   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 101/120
	I0421 18:50:57.120656   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 102/120
	I0421 18:50:58.122967   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 103/120
	I0421 18:50:59.124316   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 104/120
	I0421 18:51:00.125643   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 105/120
	I0421 18:51:01.128071   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 106/120
	I0421 18:51:02.129415   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 107/120
	I0421 18:51:03.130746   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 108/120
	I0421 18:51:04.132383   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 109/120
	I0421 18:51:05.134013   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 110/120
	I0421 18:51:06.135412   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 111/120
	I0421 18:51:07.136737   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 112/120
	I0421 18:51:08.138262   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 113/120
	I0421 18:51:09.139582   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 114/120
	I0421 18:51:10.141230   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 115/120
	I0421 18:51:11.142768   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 116/120
	I0421 18:51:12.144164   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 117/120
	I0421 18:51:13.145579   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 118/120
	I0421 18:51:14.146896   28334 main.go:141] libmachine: (ha-113226-m03) Waiting for machine to stop 119/120
	I0421 18:51:15.147672   28334 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0421 18:51:15.147729   28334 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0421 18:51:15.149771   28334 out.go:177] 
	W0421 18:51:15.151347   28334 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0421 18:51:15.151371   28334 out.go:239] * 
	* 
	W0421 18:51:15.154166   28334 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 18:51:15.155820   28334 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-113226 -v=7 --alsologtostderr" : exit status 82
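For context (not part of the captured output): in this run, exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above; the kvm2 driver polled the ha-113226-m03 VM 120 times (roughly two minutes) without it leaving the "Running" state. A possible manual follow-up on the CI host, assuming libvirt access and that the libvirt domain carries the node name, would be:

    virsh domstate ha-113226-m03    # check whether the VM is still reported as running
    virsh destroy ha-113226-m03     # force power-off if a graceful shutdown hangs

These are standard libvirt commands offered only as a diagnostic sketch; they are not part of the test flow.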
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-113226 --wait=true -v=7 --alsologtostderr
E0421 18:52:32.255318   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:54:06.205142   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-113226 --wait=true -v=7 --alsologtostderr: (4m19.494023395s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-113226
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-113226 -n ha-113226
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-113226 logs -n 25: (2.102718102s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m04 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp testdata/cp-test.txt                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226:/home/docker/cp-test_ha-113226-m04_ha-113226.txt                       |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226 sudo cat                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226.txt                                 |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03:/home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m03 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-113226 node stop m02 -v=7                                                     | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-113226 node start m02 -v=7                                                    | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-113226 -v=7                                                           | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-113226 -v=7                                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-113226 --wait=true -v=7                                                    | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:51 UTC | 21 Apr 24 18:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-113226                                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:55 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:51:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:51:15.210483   28793 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:51:15.210608   28793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:51:15.210617   28793 out.go:304] Setting ErrFile to fd 2...
	I0421 18:51:15.210621   28793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:51:15.210825   28793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:51:15.211357   28793 out.go:298] Setting JSON to false
	I0421 18:51:15.212847   28793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1973,"bootTime":1713723502,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:51:15.213145   28793 start.go:139] virtualization: kvm guest
	I0421 18:51:15.215341   28793 out.go:177] * [ha-113226] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:51:15.216599   28793 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:51:15.216622   28793 notify.go:220] Checking for updates...
	I0421 18:51:15.217828   28793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:51:15.219184   28793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:51:15.220509   28793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:51:15.221764   28793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:51:15.223076   28793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:51:15.224752   28793 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:51:15.224852   28793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:51:15.225293   28793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:51:15.225335   28793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:51:15.240365   28793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I0421 18:51:15.240835   28793 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:51:15.241385   28793 main.go:141] libmachine: Using API Version  1
	I0421 18:51:15.241410   28793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:51:15.241726   28793 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:51:15.241910   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:51:15.280558   28793 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 18:51:15.281869   28793 start.go:297] selected driver: kvm2
	I0421 18:51:15.281884   28793 start.go:901] validating driver "kvm2" against &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-11
3226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth
:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:51:15.282105   28793 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:51:15.282455   28793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:51:15.282530   28793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:51:15.299488   28793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:51:15.300171   28793 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:51:15.300230   28793 cni.go:84] Creating CNI manager for ""
	I0421 18:51:15.300241   28793 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0421 18:51:15.300300   28793 start.go:340] cluster config:
	{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:51:15.300433   28793 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:51:15.302753   28793 out.go:177] * Starting "ha-113226" primary control-plane node in "ha-113226" cluster
	I0421 18:51:15.303855   28793 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:51:15.303895   28793 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:51:15.303917   28793 cache.go:56] Caching tarball of preloaded images
	I0421 18:51:15.304008   28793 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:51:15.304023   28793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:51:15.304225   28793 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:51:15.304475   28793 start.go:360] acquireMachinesLock for ha-113226: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:51:15.304542   28793 start.go:364] duration metric: took 40.286µs to acquireMachinesLock for "ha-113226"
	I0421 18:51:15.304562   28793 start.go:96] Skipping create...Using existing machine configuration
	I0421 18:51:15.304572   28793 fix.go:54] fixHost starting: 
	I0421 18:51:15.304876   28793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:51:15.304918   28793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:51:15.319327   28793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34983
	I0421 18:51:15.319691   28793 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:51:15.320155   28793 main.go:141] libmachine: Using API Version  1
	I0421 18:51:15.320178   28793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:51:15.320494   28793 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:51:15.320692   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:51:15.320826   28793 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:51:15.322375   28793 fix.go:112] recreateIfNeeded on ha-113226: state=Running err=<nil>
	W0421 18:51:15.322393   28793 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 18:51:15.324160   28793 out.go:177] * Updating the running kvm2 "ha-113226" VM ...
	I0421 18:51:15.325286   28793 machine.go:94] provisionDockerMachine start ...
	I0421 18:51:15.325303   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:51:15.325515   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.328112   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.328610   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.328641   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.328797   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.328954   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.329124   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.329241   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.329386   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.329619   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.329633   28793 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 18:51:15.443514   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226
	
	I0421 18:51:15.443537   28793 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:51:15.443777   28793 buildroot.go:166] provisioning hostname "ha-113226"
	I0421 18:51:15.443801   28793 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:51:15.444006   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.446790   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.447148   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.447176   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.447313   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.447495   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.447644   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.447785   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.447936   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.448115   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.448127   28793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226 && echo "ha-113226" | sudo tee /etc/hostname
	I0421 18:51:15.572465   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226
	
	I0421 18:51:15.572496   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.575227   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.575633   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.575664   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.575830   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.576037   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.576163   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.576282   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.576427   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.576585   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.576601   28793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:51:15.695540   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:51:15.695586   28793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:51:15.695620   28793 buildroot.go:174] setting up certificates
	I0421 18:51:15.695633   28793 provision.go:84] configureAuth start
	I0421 18:51:15.695641   28793 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:51:15.695876   28793 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:51:15.698449   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.698815   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.698840   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.698985   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.701045   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.701406   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.701436   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.701557   28793 provision.go:143] copyHostCerts
	I0421 18:51:15.701593   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:51:15.701625   28793 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:51:15.701650   28793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:51:15.701721   28793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:51:15.701789   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:51:15.701808   28793 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:51:15.701815   28793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:51:15.701837   28793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:51:15.701881   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:51:15.701925   28793 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:51:15.701934   28793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:51:15.701958   28793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:51:15.702001   28793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226 san=[127.0.0.1 192.168.39.60 ha-113226 localhost minikube]
	I0421 18:51:15.805246   28793 provision.go:177] copyRemoteCerts
	I0421 18:51:15.805299   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:51:15.805320   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.807862   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.808218   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.808248   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.808411   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.808591   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.808782   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.808912   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:51:15.894966   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:51:15.895032   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:51:15.930596   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:51:15.930665   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0421 18:51:15.960170   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:51:15.960234   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 18:51:15.987760   28793 provision.go:87] duration metric: took 292.11615ms to configureAuth
	I0421 18:51:15.987783   28793 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:51:15.987985   28793 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:51:15.988094   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.990566   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.990921   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.990947   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.991077   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.991267   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.991402   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.991564   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.991774   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.991923   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.991937   28793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:52:46.906791   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:52:46.906813   28793 machine.go:97] duration metric: took 1m31.581515123s to provisionDockerMachine
	I0421 18:52:46.906824   28793 start.go:293] postStartSetup for "ha-113226" (driver="kvm2")
	I0421 18:52:46.906834   28793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:52:46.906846   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:46.907222   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:52:46.907248   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:46.910424   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:46.910912   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:46.910940   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:46.911084   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:46.911273   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:46.911412   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:46.911561   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:52:46.993882   28793 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:52:46.998743   28793 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:52:46.998772   28793 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:52:46.998846   28793 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:52:46.998959   28793 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:52:46.998972   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:52:46.999051   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:52:47.009247   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:52:47.036322   28793 start.go:296] duration metric: took 129.488493ms for postStartSetup
	I0421 18:52:47.036355   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.036618   28793 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0421 18:52:47.036645   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.039253   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.039673   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.039694   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.039820   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.040004   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.040204   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.040352   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	W0421 18:52:47.120679   28793 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0421 18:52:47.120703   28793 fix.go:56] duration metric: took 1m31.816132887s for fixHost
	I0421 18:52:47.120722   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.123463   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.123808   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.123839   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.124057   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.124251   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.124422   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.124561   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.124734   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:52:47.124940   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:52:47.124951   28793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:52:47.230883   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713725567.180811629
	
	I0421 18:52:47.230909   28793 fix.go:216] guest clock: 1713725567.180811629
	I0421 18:52:47.230921   28793 fix.go:229] Guest: 2024-04-21 18:52:47.180811629 +0000 UTC Remote: 2024-04-21 18:52:47.120709476 +0000 UTC m=+91.954605676 (delta=60.102153ms)
	I0421 18:52:47.230976   28793 fix.go:200] guest clock delta is within tolerance: 60.102153ms
	I0421 18:52:47.230990   28793 start.go:83] releasing machines lock for "ha-113226", held for 1m31.926434241s
	I0421 18:52:47.231041   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.231324   28793 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:52:47.233757   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.234153   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.234176   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.234357   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.234875   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.235054   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.235162   28793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:52:47.235202   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.235299   28793 ssh_runner.go:195] Run: cat /version.json
	I0421 18:52:47.235324   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.237774   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.237838   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.238168   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.238195   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.238230   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.238248   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.238394   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.238508   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.238573   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.238639   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.238728   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.238788   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.238866   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:52:47.238969   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:52:47.316550   28793 ssh_runner.go:195] Run: systemctl --version
	I0421 18:52:47.341717   28793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:52:47.503572   28793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:52:47.512142   28793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:52:47.512194   28793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:52:47.522115   28793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0421 18:52:47.522135   28793 start.go:494] detecting cgroup driver to use...
	I0421 18:52:47.522187   28793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:52:47.539204   28793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:52:47.554220   28793 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:52:47.554262   28793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:52:47.568599   28793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:52:47.582768   28793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:52:47.739606   28793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:52:47.903652   28793 docker.go:233] disabling docker service ...
	I0421 18:52:47.903728   28793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:52:47.922624   28793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:52:47.938306   28793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:52:48.099823   28793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:52:48.259998   28793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:52:48.275500   28793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:52:48.297314   28793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:52:48.297378   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.309220   28793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:52:48.309279   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.320698   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.331654   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.342638   28793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:52:48.353786   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.364507   28793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.376690   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.387296   28793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:52:48.396941   28793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:52:48.406612   28793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:52:48.561185   28793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:52:56.261169   28793 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.699943072s)
	I0421 18:52:56.261207   28793 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:52:56.261269   28793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:52:56.268918   28793 start.go:562] Will wait 60s for crictl version
	I0421 18:52:56.268996   28793 ssh_runner.go:195] Run: which crictl
	I0421 18:52:56.273431   28793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:52:56.319738   28793 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:52:56.319819   28793 ssh_runner.go:195] Run: crio --version
	I0421 18:52:56.356193   28793 ssh_runner.go:195] Run: crio --version
	I0421 18:52:56.394157   28793 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:52:56.395489   28793 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:52:56.398231   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:56.398629   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:56.398657   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:56.398858   28793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:52:56.403932   28793 kubeadm.go:877] updating cluster {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:
default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvis
or:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:52:56.404048   28793 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:52:56.404086   28793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:52:56.450751   28793 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:52:56.450774   28793 crio.go:433] Images already preloaded, skipping extraction
	I0421 18:52:56.450818   28793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:52:56.487736   28793 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:52:56.487764   28793 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:52:56.487785   28793 kubeadm.go:928] updating node { 192.168.39.60 8443 v1.30.0 crio true true} ...
	I0421 18:52:56.487892   28793 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:52:56.487954   28793 ssh_runner.go:195] Run: crio config
	I0421 18:52:56.540756   28793 cni.go:84] Creating CNI manager for ""
	I0421 18:52:56.540777   28793 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0421 18:52:56.540789   28793 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:52:56.540807   28793 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-113226 NodeName:ha-113226 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:52:56.540944   28793 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-113226"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 18:52:56.540963   28793 kube-vip.go:111] generating kube-vip config ...
	I0421 18:52:56.541000   28793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:52:56.553778   28793 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:52:56.553879   28793 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0421 18:52:56.553941   28793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:52:56.564524   28793 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:52:56.564604   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0421 18:52:56.574919   28793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0421 18:52:56.593821   28793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:52:56.614542   28793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0421 18:52:56.633255   28793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 18:52:56.651926   28793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:52:56.656443   28793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:52:56.825021   28793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:52:56.843174   28793 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.60
	I0421 18:52:56.843206   28793 certs.go:194] generating shared ca certs ...
	I0421 18:52:56.843226   28793 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:52:56.843420   28793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:52:56.843469   28793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:52:56.843488   28793 certs.go:256] generating profile certs ...
	I0421 18:52:56.843602   28793 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:52:56.843647   28793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4
	I0421 18:52:56.843665   28793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.233 192.168.39.221 192.168.39.254]
	I0421 18:52:56.961003   28793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4 ...
	I0421 18:52:56.961032   28793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4: {Name:mk07572f65db96649e5689620ad024dc81367460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:52:56.961193   28793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4 ...
	I0421 18:52:56.961206   28793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4: {Name:mkb9bc9cf2fa9da84e8673ad1c03c994b9959f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:52:56.961273   28793 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:52:56.961421   28793 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:52:56.961542   28793 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:52:56.961556   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:52:56.961568   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:52:56.961577   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:52:56.961614   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:52:56.961626   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:52:56.961639   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:52:56.961650   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:52:56.961661   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:52:56.961704   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:52:56.961728   28793 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:52:56.961741   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:52:56.961764   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:52:56.961786   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:52:56.961805   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:52:56.961858   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:52:56.961885   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:52:56.961898   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:56.961910   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:52:56.962462   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:52:56.992900   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:52:57.021462   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:52:57.048047   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:52:57.075589   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0421 18:52:57.104632   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:52:57.132120   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:52:57.160882   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:52:57.188563   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:52:57.214912   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:52:57.240783   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:52:57.266176   28793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:52:57.284357   28793 ssh_runner.go:195] Run: openssl version
	I0421 18:52:57.290914   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:52:57.303135   28793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:52:57.308578   28793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:52:57.308635   28793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:52:57.315380   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 18:52:57.327336   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:52:57.340616   28793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:52:57.345902   28793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:52:57.345967   28793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:52:57.352700   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:52:57.365418   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:52:57.379447   28793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:57.384944   28793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:57.385019   28793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:57.391618   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
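Editorial note (not part of the log): the three command sequences above each hash a CA certificate with `openssl x509 -hash -noout` and symlink it into /etc/ssl/certs under its subject-name hash (e.g. b5213941.0), which is how OpenSSL-based clients discover trusted CAs. A minimal Go sketch of that same pattern, assuming hypothetical paths and not taken from the minikube source:

```go
// Sketch only: hash a CA certificate and symlink it into /etc/ssl/certs
// as <subject-hash>.0, mirroring the `openssl x509 -hash` + `ln -fs`
// commands shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject-name hash
	// (e.g. "b5213941") that OpenSSL uses to look up trusted CAs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Create /etc/ssl/certs/<hash>.0 pointing at the certificate,
	// equivalent to the `ln -fs` in the log (remove first for -f semantics).
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```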
	I0421 18:52:57.403002   28793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:52:57.408179   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 18:52:57.414516   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 18:52:57.420907   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 18:52:57.427073   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 18:52:57.433452   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 18:52:57.439470   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
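Editorial note (not part of the log): the `-checkend 86400` invocations above ask openssl to exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the provisioner decides whether control-plane certs need regeneration. An equivalent check written directly in Go with crypto/x509, as a hedged sketch with a hypothetical path:

```go
// Sketch only: report whether a PEM certificate expires within a given
// duration, the same condition `openssl x509 -checkend 86400` tests.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin returns true if the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}
```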
	I0421 18:52:57.445348   28793 kubeadm.go:391] StartCluster: {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:def
ault APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:52:57.445466   28793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 18:52:57.445509   28793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 18:52:57.491949   28793 cri.go:89] found id: "cec5aecc2baa9aa0253ec3572fe6694d40ae706a199900f1fbd191e643bccf68"
	I0421 18:52:57.491971   28793 cri.go:89] found id: "e712bcee62861e6f7147d7647ae7bb28143301bc8962f8044e30b41e479fff83"
	I0421 18:52:57.491977   28793 cri.go:89] found id: "a8a10be9bb5c911a3c668dda6454032df65d90f1cee81ee86c6a4dae3beff46b"
	I0421 18:52:57.491983   28793 cri.go:89] found id: "e9859a052b0cdfd328208070617bb9885e431ee60338ee4aacc99886c1076168"
	I0421 18:52:57.491987   28793 cri.go:89] found id: "a34ff5cf35a2507a4d0034fd919d274ef31c3fefd6a4c29a738fde35359aa598"
	I0421 18:52:57.491991   28793 cri.go:89] found id: "3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f"
	I0421 18:52:57.491998   28793 cri.go:89] found id: "0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3"
	I0421 18:52:57.492001   28793 cri.go:89] found id: "7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3"
	I0421 18:52:57.492005   28793 cri.go:89] found id: "a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f"
	I0421 18:52:57.492016   28793 cri.go:89] found id: "6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639"
	I0421 18:52:57.492020   28793 cri.go:89] found id: "51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01"
	I0421 18:52:57.492031   28793 cri.go:89] found id: "9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c"
	I0421 18:52:57.492040   28793 cri.go:89] found id: "e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab"
	I0421 18:52:57.492048   28793 cri.go:89] found id: ""
	I0421 18:52:57.492095   28793 ssh_runner.go:195] Run: sudo runc list -f json
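Editorial note (not part of the log): the "found id" entries above are the output of `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, i.e. the IDs of every kube-system container (running or exited), one per line. A small Go sketch of that query pattern, illustrative only and not the minikube implementation:

```go
// Sketch only: run crictl with a pod-namespace label filter and collect the
// container IDs it prints, matching the "found id" lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers whose pod is in
// the kube-system namespace, including exited ones (-a).
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```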
	
	
	==> CRI-O <==
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.472790939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab8beb95-351d-4bcc-b2d5-3ff93c0d9700 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.474273830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d502009d-65d8-4e28-83c8-98dd79fb4bb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.474712571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725735474689266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d502009d-65d8-4e28-83c8-98dd79fb4bb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.475343618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c4daf70-1e63-40f6-afc3-6a99d267f355 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.475474114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c4daf70-1e63-40f6-afc3-6a99d267f355 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.475902417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"
metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string
{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d
71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c4daf70-1e63-40f6-afc3-6a99d267f355 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.532266163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffbc9808-19d5-4354-83a7-3f08e331010b name=/runtime.v1.RuntimeService/Version
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.532420352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffbc9808-19d5-4354-83a7-3f08e331010b name=/runtime.v1.RuntimeService/Version
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.533929855Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e015a5ac-aace-416a-8dd6-bbf73d0271fd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.534457644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725735534431491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e015a5ac-aace-416a-8dd6-bbf73d0271fd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.535066535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcbf8991-2a49-474e-aed6-a9db315ec524 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.535125443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcbf8991-2a49-474e-aed6-a9db315ec524 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.536080973Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b04152e0-2359-4cde-b025-2bcc38bc4be1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.536449968Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-vvhg8,Uid:eb008f69-72f1-4ab3-a77a-791783889db9,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725617122004141,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:44:33.719721091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-113226,Uid:f1e7a110736216d6563f961109716ab1,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1713725597774256258,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{kubernetes.io/config.hash: f1e7a110736216d6563f961109716ab1,kubernetes.io/config.seen: 2024-04-21T18:52:56.602101100Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-n8sbt,Uid:a6d836c4-74bf-4509-8ca9-8d0dea360fa2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583439106045,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-21T18:41:08.382266512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-113226,Uid:6c27d6bd33bd9cc85fad97fe1045c356,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583390835763,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.60:8443,kubernetes.io/config.hash: 6c27d6bd33bd9cc85fad97fe1045c356,kubernetes.io/config.seen: 2024-04-21T18:40:55.847319642Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-113226,Uid:278d9
f60f07e2ab4168c325829fe7af9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583375505281,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 278d9f60f07e2ab4168c325829fe7af9,kubernetes.io/config.seen: 2024-04-21T18:40:55.847350875Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&PodSandboxMetadata{Name:kube-proxy-h75dp,Uid:c365aaf4-b083-4247-acd0-cc753abc9f98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583345599876,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c365aaf4-b083-4247-acd0-cc753abc9f98,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:41:05.862627360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&PodSandboxMetadata{Name:etcd-ha-113226,Uid:d6896cec9347eeed5c4aeae0852d3a14,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583340920935,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.60:2379,kubernetes.io/config.hash: d6896cec9347eeed5c4aeae0852d3a14,kubernetes.io/config.seen: 2024-04-21T18:40:55.847318004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b9697b59042c3aa2158c
1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&PodSandboxMetadata{Name:kindnet-d7vgl,Uid:d7958e8c-754e-4550-bb8f-25cf241d9179,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583268750610,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:41:05.878260839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-113226,Uid:0bdc0bc9fa11b76392eb37c3eb09966b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725583260131133,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: P
OD,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bdc0bc9fa11b76392eb37c3eb09966b,kubernetes.io/config.seen: 2024-04-21T18:40:55.847320776Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhskp,Uid:ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725577952328920,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:41:08.364664932Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aa37bc69-20f7-416c-9cb7-56430aed3215,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713725577940648782,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imageP
ullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-21T18:41:09.091015577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-vvhg8,Uid:eb008f69-72f1-4ab3-a77a-791783889db9,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713725074050010101,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:44:33.719721091Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhskp,Uid:ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713724868978977550,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:41:08.364664932Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-n8sbt,Uid:a6d836c4-74bf-4509-8ca9-8d0dea360fa2,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713724868695639532,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:41:08.382266512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&PodSandboxMetadata{Name:kube-proxy-h75dp,Uid:c365aaf4-b083-4247-acd0-cc753abc9f98,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713724866201387521,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T18:41:05.862627360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-113226,Uid:278d9f60f07e2ab4168c325829fe7af9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713724846384558760,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 278d9f60f07e2ab4168c325829fe7af9,kubernetes.io/config.seen: 2024-04-21T18:40:45.926417277Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&PodSandboxMetadata{Name:etcd-ha-113226,Uid:d6896cec9347eeed5c4aeae0852d3a14,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713724846378973412,Labels:map[string]string{component: etcd,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.60:2379,kubernetes.io/config.hash: d6896cec9347eeed5c4aeae0852d3a14,kubernetes.io/config.seen: 2024-04-21T18:40:45.926423497Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b04152e0-2359-4cde-b025-2bcc38bc4be1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.536937238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"
metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string
{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d
71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcbf8991-2a49-474e-aed6-a9db315ec524 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.538566162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=975836ec-76ff-458d-ad5b-5e8ab225b757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.538643326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=975836ec-76ff-458d-ad5b-5e8ab225b757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.539362906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"
metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string
{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d
71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=975836ec-76ff-458d-ad5b-5e8ab225b757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.592619076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=575bb654-3db0-48dc-a5b4-55f5fa01ec86 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.592722683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=575bb654-3db0-48dc-a5b4-55f5fa01ec86 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.594035604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45f9c858-0e34-4f20-b904-26dda0b31497 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.594633031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725735594596670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45f9c858-0e34-4f20-b904-26dda0b31497 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.595809242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0b31344-39b8-45dc-99a9-5a3424398b1d name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.595870727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0b31344-39b8-45dc-99a9-5a3424398b1d name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:55:35 ha-113226 crio[4059]: time="2024-04-21 18:55:35.596358432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"
metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string
{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d
71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0b31344-39b8-45dc-99a9-5a3424398b1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8c59cf8229cb9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   b9697b59042c3       kindnet-d7vgl
	ac3545ccf2386       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   f92c491180117       kube-apiserver-ha-113226
	6a975d433ed67       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   2                   68d32b69085e4       kube-controller-manager-ha-113226
	946ccec9ce1c6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   0c1137d4f8c0a       busybox-fc5497c4f-vvhg8
	42a902a7308f4       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   95aebbed003bf       kube-vip-ha-113226
	c3e186bcff628       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   2ea2894bd3019       kube-proxy-h75dp
	0c169617913d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   34be5bbca0e6f       coredns-7db6d8ff4d-n8sbt
	c1f212d6384dd       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   f92c491180117       kube-apiserver-ha-113226
	141b5338a5c5c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   a8446a0185d7a       kube-scheduler-ha-113226
	f77a8f0a05fd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   37b7204b479e0       etcd-ha-113226
	75879d4cf5765       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               3                   b9697b59042c3       kindnet-d7vgl
	2a96397b9fbb4       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   68d32b69085e4       kube-controller-manager-ha-113226
	16da795694ae0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e7a8bf497b320       coredns-7db6d8ff4d-zhskp
	31173f263a910       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   3bfb041480c05       storage-provisioner
	70f640c1c70ad       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   faa43bf489bc5       busybox-fc5497c4f-vvhg8
	3e93f6b05d337       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   65ac1d3e43166       coredns-7db6d8ff4d-zhskp
	0b5d0ab414db7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   2607d8484c47e       coredns-7db6d8ff4d-n8sbt
	7048fade386a1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      14 minutes ago       Exited              kube-proxy                0                   68e3a1db8a00b       kube-proxy-h75dp
	9224faad5a972       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   6167071453e71       etcd-ha-113226
	e5498303bb3f9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   adb821c8b93f8       kube-scheduler-ha-113226
	
	
	==> coredns [0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3] <==
	[INFO] 10.244.2.2:36822 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003347s
	[INFO] 10.244.2.2:41452 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195319s
	[INFO] 10.244.2.2:60508 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074721s
	[INFO] 10.244.0.4:51454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104908s
	[INFO] 10.244.0.4:57376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109603s
	[INFO] 10.244.0.4:40827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078308s
	[INFO] 10.244.0.4:47256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153662s
	[INFO] 10.244.0.4:37424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014403s
	[INFO] 10.244.0.4:57234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144257s
	[INFO] 10.244.1.2:51901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259177s
	[INFO] 10.244.1.2:44450 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123202s
	[INFO] 10.244.2.2:53556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169239s
	[INFO] 10.244.2.2:42828 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117966s
	[INFO] 10.244.2.2:51827 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137514s
	[INFO] 10.244.0.4:56918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047175s
	[INFO] 10.244.1.2:45608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118838s
	[INFO] 10.244.1.2:50713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284967s
	[INFO] 10.244.2.2:58426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000313356s
	[INFO] 10.244.2.2:39340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130525s
	[INFO] 10.244.0.4:58687 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094588s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1613568619]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:53:09.665) (total time: 10001ms):
	Trace[1613568619]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:53:19.667)
	Trace[1613568619]: [10.001757051s] [10.001757051s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47800->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47800->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47818->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47818->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1932980695]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:53:11.070) (total time: 10001ms):
	Trace[1932980695]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:53:21.072)
	Trace[1932980695]: [10.001891188s] [10.001891188s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1323223527]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:53:13.122) (total time: 10001ms):
	Trace[1323223527]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:53:23.123)
	Trace[1323223527]: [10.001302614s] [10.001302614s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35716->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35716->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f] <==
	[INFO] 10.244.1.2:41498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146514s
	[INFO] 10.244.2.2:54180 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001245595s
	[INFO] 10.244.2.2:56702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118055s
	[INFO] 10.244.2.2:52049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103643s
	[INFO] 10.244.2.2:39892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013318s
	[INFO] 10.244.0.4:50393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617766s
	[INFO] 10.244.0.4:58125 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001163449s
	[INFO] 10.244.1.2:55583 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000370228s
	[INFO] 10.244.1.2:57237 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092539s
	[INFO] 10.244.2.2:42488 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129888s
	[INFO] 10.244.0.4:48460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104891s
	[INFO] 10.244.0.4:35562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112767s
	[INFO] 10.244.0.4:37396 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009448s
	[INFO] 10.244.1.2:40110 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268116s
	[INFO] 10.244.1.2:40165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166492s
	[INFO] 10.244.2.2:45365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014902s
	[INFO] 10.244.2.2:48282 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093124s
	[INFO] 10.244.0.4:43339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000357932s
	[INFO] 10.244.0.4:39537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086381s
	[INFO] 10.244.0.4:33649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093318s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-113226
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_40_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:40:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:55:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:41:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-113226
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f770328f068141e091b6c3dbf4a76488
	  System UUID:                f770328f-0681-41e0-91b6-c3dbf4a76488
	  Boot ID:                    bbf1e5be-35e8-4986-b694-bc173cac60e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vvhg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-n8sbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-zhskp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-113226                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-d7vgl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-113226             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-113226    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-h75dp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-113226             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-113226                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 110s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-113226 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-113226 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-113226 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-113226 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Warning  ContainerGCFailed        2m41s (x2 over 3m41s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           104s                   node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   RegisteredNode           94s                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	
	
	Name:               ha-113226-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_42_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:42:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:55:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:54:31 +0000   Sun, 21 Apr 2024 18:53:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:54:31 +0000   Sun, 21 Apr 2024 18:53:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:54:31 +0000   Sun, 21 Apr 2024 18:53:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:54:31 +0000   Sun, 21 Apr 2024 18:54:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    ha-113226-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e96ca06000049ab994a1d4c31482f88
	  System UUID:                8e96ca06-0000-49ab-994a-1d4c31482f88
	  Boot ID:                    6038fb53-8a68-423d-9e50-f8692a3f2cf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-djlm5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-113226-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-4hx6j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-113226-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-113226-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nsv74                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-113226-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-113226-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 85s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-113226-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-113226-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-113226-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  NodeNotReady             9m16s                  node-controller  Node ha-113226-m02 status is now: NodeNotReady
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node ha-113226-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                   node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           94s                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           36s                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	
	
	Name:               ha-113226-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_44_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:44:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:55:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:55:04 +0000   Sun, 21 Apr 2024 18:54:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:55:04 +0000   Sun, 21 Apr 2024 18:54:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:55:04 +0000   Sun, 21 Apr 2024 18:54:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:55:04 +0000   Sun, 21 Apr 2024 18:54:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-113226-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e527acdd3b3544d5b53bced4a1abdb9a
	  System UUID:                e527acdd-3b35-44d5-b53b-ced4a1abdb9a
	  Boot ID:                    794ec30b-bd21-4c76-bdf7-93f6679a8acd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lccdt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-113226-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-rhmbs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-113226-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-113226-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-shlwr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-113226-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-113226-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-113226-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-113226-m03 status is now: NodeNotReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x2 over 62s)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x2 over 62s)  kubelet          Node ha-113226-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x2 over 62s)  kubelet          Node ha-113226-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-113226-m03 has been rebooted, boot id: 794ec30b-bd21-4c76-bdf7-93f6679a8acd
	  Normal   NodeReady                62s                kubelet          Node ha-113226-m03 status is now: NodeReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-113226-m03 event: Registered Node ha-113226-m03 in Controller
	
	
	Name:               ha-113226-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_45_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:45:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:55:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:55:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:55:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:55:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:55:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-113226-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d55ce55d9e44738a42ed29cc9f1198
	  System UUID:                c1d55ce5-5d9e-4473-8a42-ed29cc9f1198
	  Boot ID:                    ebc30470-0a8b-487f-94f3-953131eede89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jkd2l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-6s6v7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-113226-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-113226-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s               node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-113226-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-113226-m04 has been rebooted, boot id: ebc30470-0a8b-487f-94f3-953131eede89
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-113226-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-113226-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-113226-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-113226-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                 kubelet          Node ha-113226-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.054859] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.200473] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.119933] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.314231] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.898172] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.066212] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.334925] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +1.112693] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.070346] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.082112] kauditd_printk_skb: 40 callbacks suppressed
	[Apr21 18:41] kauditd_printk_skb: 21 callbacks suppressed
	[Apr21 18:43] kauditd_printk_skb: 74 callbacks suppressed
	[Apr21 18:49] kauditd_printk_skb: 1 callbacks suppressed
	[Apr21 18:51] kauditd_printk_skb: 1 callbacks suppressed
	[Apr21 18:52] systemd-fstab-generator[3973]: Ignoring "noauto" option for root device
	[  +0.160575] systemd-fstab-generator[3986]: Ignoring "noauto" option for root device
	[  +0.200169] systemd-fstab-generator[4000]: Ignoring "noauto" option for root device
	[  +0.155198] systemd-fstab-generator[4012]: Ignoring "noauto" option for root device
	[  +0.304935] systemd-fstab-generator[4040]: Ignoring "noauto" option for root device
	[  +8.255385] systemd-fstab-generator[4144]: Ignoring "noauto" option for root device
	[  +0.098018] kauditd_printk_skb: 100 callbacks suppressed
	[Apr21 18:53] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.554895] kauditd_printk_skb: 78 callbacks suppressed
	[ +28.116673] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.455822] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c] <==
	":"2024-04-21T18:51:15.336251Z","time spent":"809.592598ms","remote":"127.0.0.1:40468","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 "}
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-21T18:51:16.145902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:51:15.343656Z","time spent":"802.241085ms","remote":"127.0.0.1:40312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-21T18:51:16.256755Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"798b9b0ee1342456","rtt":"11.934223ms","error":"dial tcp 192.168.39.233:2380: i/o timeout"}
	{"level":"info","ts":"2024-04-21T18:51:16.26168Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1a622f206f99396a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-21T18:51:16.261905Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.261953Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262005Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262115Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262152Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262296Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262341Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.26235Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262363Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262378Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262465Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262492Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262518Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262555Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.265498Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-04-21T18:51:16.265721Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-04-21T18:51:16.26576Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-113226","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	
	
	==> etcd [f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c] <==
	{"level":"warn","ts":"2024-04-21T18:54:28.175752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:54:28.207438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:54:28.307397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1a622f206f99396a","from":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T18:54:28.339533Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.221:2380/version","remote-member-id":"3968f0b022895f5","error":"Get \"https://192.168.39.221:2380/version\": dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:28.339672Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3968f0b022895f5","error":"Get \"https://192.168.39.221:2380/version\": dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:29.7856Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:29.785619Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:32.342041Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.221:2380/version","remote-member-id":"3968f0b022895f5","error":"Get \"https://192.168.39.221:2380/version\": dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:32.342211Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3968f0b022895f5","error":"Get \"https://192.168.39.221:2380/version\": dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:34.785838Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:34.785852Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:36.344251Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.221:2380/version","remote-member-id":"3968f0b022895f5","error":"Get \"https://192.168.39.221:2380/version\": dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:36.344366Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3968f0b022895f5","error":"Get \"https://192.168.39.221:2380/version\": dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-21T18:54:39.160327Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:54:39.163814Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:54:39.166422Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:54:39.170509Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"3968f0b022895f5","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-21T18:54:39.170571Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:54:39.170867Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"3968f0b022895f5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-21T18:54:39.170935Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:54:39.229799Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.221:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-21T18:54:39.786481Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:39.786619Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:53.405368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.1555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-21T18:54:53.40559Z","caller":"traceutil/trace.go:171","msg":"trace[264659370] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:2498; }","duration":"125.47486ms","start":"2024-04-21T18:54:53.280077Z","end":"2024-04-21T18:54:53.405552Z","steps":["trace[264659370] 'count revisions from in-memory index tree'  (duration: 123.498298ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:55:36 up 15 min,  0 users,  load average: 0.69, 0.61, 0.44
	Linux ha-113226 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b] <==
	I0421 18:53:04.516487       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0421 18:53:14.828806       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0421 18:53:16.675774       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0421 18:53:19.747937       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0421 18:53:26.385067       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.207:46190->10.96.0.1:443: read: connection reset by peer
	I0421 18:53:32.035860       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
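	The crashed kindnet container above repeatedly failed to reach the in-cluster API endpoint (10.96.0.1:443) and panicked once it ran out of retries. As a rough illustration of that behaviour only (not kindnetd's actual code; the retry count and interval below are assumptions), a bounded node-list retry against the Kubernetes API with client-go looks roughly like this:
	
	package main
	
	import (
		"context"
		"log"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config resolves the API server service address,
		// i.e. the 10.96.0.1:443 endpoint seen in the log above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("loading in-cluster config: %v", err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("building clientset: %v", err)
		}
	
		const maxRetries = 5 // assumed value, not kindnetd's real limit
		for attempt := 1; ; attempt++ {
			nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				log.Printf("listed %d nodes", len(nodes.Items))
				return
			}
			if attempt >= maxRetries {
				// Mirrors the fatal behaviour in the log: give up after the retry budget.
				log.Fatalf("reached maximum retries obtaining node list: %v", err)
			}
			log.Printf("failed to get nodes, retrying after error: %v", err)
			time.Sleep(10 * time.Second)
		}
	}
	
	In the failing run the loop never succeeds because the VIP/apiserver is unreachable, so the process exits and crio restarts it; the second kindnet section below shows the replacement container syncing node CIDRs normally once the apiserver is back.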
	
	
	==> kindnet [8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b] <==
	I0421 18:54:58.160965       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:55:08.179989       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:55:08.180059       1 main.go:227] handling current node
	I0421 18:55:08.180087       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:55:08.180093       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:55:08.180328       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:55:08.180376       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:55:08.180457       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:55:08.180464       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:55:18.197753       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:55:18.197969       1 main.go:227] handling current node
	I0421 18:55:18.198070       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:55:18.198104       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:55:18.198333       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:55:18.198364       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:55:18.198439       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:55:18.198461       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:55:28.213340       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:55:28.214272       1 main.go:227] handling current node
	I0421 18:55:28.214331       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:55:28.214399       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:55:28.214692       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0421 18:55:28.214732       1 main.go:250] Node ha-113226-m03 has CIDR [10.244.2.0/24] 
	I0421 18:55:28.214789       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:55:28.214820       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d] <==
	I0421 18:53:49.845511       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0421 18:53:49.845528       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0421 18:53:49.934346       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 18:53:49.934418       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 18:53:49.935402       1 shared_informer.go:320] Caches are synced for configmaps
	I0421 18:53:49.935502       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0421 18:53:49.936028       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 18:53:49.936576       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 18:53:49.942679       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0421 18:53:49.943004       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0421 18:53:49.944918       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0421 18:53:49.944973       1 aggregator.go:165] initial CRD sync complete...
	I0421 18:53:49.944988       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 18:53:49.944993       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 18:53:49.944998       1 cache.go:39] Caches are synced for autoregister controller
	W0421 18:53:49.946683       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.221 192.168.39.233]
	I0421 18:53:49.946992       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 18:53:49.947037       1 policy_source.go:224] refreshing policies
	I0421 18:53:49.948234       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 18:53:49.956690       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0421 18:53:49.960771       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0421 18:53:49.991459       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 18:53:50.846981       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0421 18:53:51.380893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.221 192.168.39.233 192.168.39.60]
	W0421 18:54:01.383063       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.233 192.168.39.60]
	
	
	==> kube-apiserver [c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73] <==
	I0421 18:53:04.736016       1 options.go:221] external host was not specified, using 192.168.39.60
	I0421 18:53:04.738855       1 server.go:148] Version: v1.30.0
	I0421 18:53:04.738993       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:53:05.362097       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0421 18:53:05.373570       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 18:53:05.378116       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0421 18:53:05.378159       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0421 18:53:05.378409       1 instance.go:299] Using reconciler: lease
	W0421 18:53:25.362136       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0421 18:53:25.362136       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0421 18:53:25.379099       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280] <==
	I0421 18:53:05.747129       1 serving.go:380] Generated self-signed cert in-memory
	I0421 18:53:06.388486       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0421 18:53:06.388539       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:53:06.390899       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0421 18:53:06.391051       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0421 18:53:06.391669       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 18:53:06.391760       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0421 18:53:26.395695       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.60:8443/healthz\": dial tcp 192.168.39.60:8443: connect: connection refused"
	
	
	==> kube-controller-manager [6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8] <==
	I0421 18:54:02.809940       1 shared_informer.go:320] Caches are synced for stateful set
	I0421 18:54:02.936863       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0421 18:54:02.937037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.463µs"
	I0421 18:54:02.937315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.536µs"
	I0421 18:54:02.953634       1 shared_informer.go:320] Caches are synced for attach detach
	I0421 18:54:02.966085       1 shared_informer.go:320] Caches are synced for resource quota
	I0421 18:54:02.987423       1 shared_informer.go:320] Caches are synced for resource quota
	I0421 18:54:02.999389       1 shared_informer.go:320] Caches are synced for disruption
	I0421 18:54:03.002846       1 shared_informer.go:320] Caches are synced for deployment
	I0421 18:54:03.396121       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 18:54:03.439536       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 18:54:03.439623       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0421 18:54:07.641727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.369µs"
	I0421 18:54:08.874908       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-hlkr9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-hlkr9\": the object has been modified; please apply your changes to the latest version and try again"
	I0421 18:54:08.875268       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2d74317c-f1fe-4f27-ad20-56db993b0b3f", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-hlkr9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-hlkr9": the object has been modified; please apply your changes to the latest version and try again
	I0421 18:54:08.891144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.877125ms"
	I0421 18:54:08.891311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.47µs"
	I0421 18:54:13.977120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.604433ms"
	I0421 18:54:13.977341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.164µs"
	I0421 18:54:32.723235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.042059ms"
	I0421 18:54:32.724411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.979µs"
	I0421 18:54:35.037523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.133µs"
	I0421 18:54:52.329752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.267381ms"
	I0421 18:54:52.329953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.563µs"
	I0421 18:55:27.277809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-113226-m04"
	
	
	==> kube-proxy [7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3] <==
	W0421 18:50:07.300572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:07.300616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:07.300639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:13.443887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:13.445246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:13.445849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:13.445922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:13.445416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:13.446239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:22.662409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:22.662544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:22.662749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:22.662810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:25.733558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:25.733625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:38.021083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:38.021389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:41.093707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:41.094509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:47.237655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:47.237772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:51:14.883908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:51:14.884046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:51:14.884148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:51:14.884243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
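	The "%!s(MISSING)", "%!F(MISSING)", "%!C(MISSING)" and "%!D(MISSING)" tokens in the URLs above are Go fmt missing-argument markers, most likely produced because the percent-encoded query string ("%21" for "!", "%2F" for "/", "%2C" for ",", "%3D" for "=") was passed through a printf-style formatting call with no operands; the underlying requests are the ordinary label-selector and field-selector list/watch calls. A minimal, illustrative reproduction of the formatting artifact:
	
	package main
	
	import "fmt"
	
	func main() {
		// Percent-encoded selector: "!service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name".
		encoded := "labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name"
	
		// Used directly as a format string (something go vet would flag), fmt reads "%21s" as
		// width 21 + verb 's' with a missing operand and prints "%!s(MISSING)"; "%2F" and "%2C"
		// degrade the same way, producing exactly the pattern seen in the kube-proxy log.
		fmt.Printf(encoded + "\n")
		// Prints: labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name
	}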
	
	
	==> kube-proxy [c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38] <==
	E0421 18:53:26.979851       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-113226\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0421 18:53:45.412525       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-113226\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0421 18:53:45.412667       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0421 18:53:45.456125       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:53:45.456329       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:53:45.456361       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:53:45.459434       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:53:45.459880       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:53:45.460268       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:53:45.461898       1 config.go:192] "Starting service config controller"
	I0421 18:53:45.462151       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:53:45.462390       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:53:45.462423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:53:45.463158       1 config.go:319] "Starting node config controller"
	I0421 18:53:45.463289       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0421 18:53:48.486103       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0421 18:53:48.486646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:53:48.487097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:53:48.487800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:53:48.488436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:53:48.488717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:53:48.488922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0421 18:53:50.064155       1 shared_informer.go:320] Caches are synced for node config
	I0421 18:53:50.064239       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:53:50.064288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5] <==
	W0421 18:53:43.190631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.60:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:43.190718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.60:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.101998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.60:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.102076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.60:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.167053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.167136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.215021       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.215096       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.358691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.358889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.677385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.677530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:45.304752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:45.304823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:45.844318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:45.844356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:46.740958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:46.741028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:47.158760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:47.158904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:47.500823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:47.500925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:47.919561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:47.919595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	I0421 18:53:59.398524       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab] <==
	W0421 18:51:13.168618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 18:51:13.168738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 18:51:13.426700       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 18:51:13.426799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 18:51:13.624431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 18:51:13.624552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 18:51:14.004476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 18:51:14.004602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 18:51:14.070072       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 18:51:14.070244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 18:51:14.131630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 18:51:14.131761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 18:51:14.215390       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 18:51:14.215542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 18:51:14.290145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 18:51:14.290281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 18:51:14.327916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 18:51:14.327982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 18:51:14.387365       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 18:51:14.387450       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:51:15.202362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 18:51:15.202466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0421 18:51:16.093374       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0421 18:51:16.093525       1 run.go:74] "command failed" err="finished without leader elect"
	I0421 18:51:16.093603       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	
	==> kubelet <==
	Apr 21 18:54:03 ha-113226 kubelet[1377]: E0421 18:54:03.881045    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:54:04 ha-113226 kubelet[1377]: I0421 18:54:04.880666    1377 scope.go:117] "RemoveContainer" containerID="75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b"
	Apr 21 18:54:04 ha-113226 kubelet[1377]: E0421 18:54:04.881150    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-d7vgl_kube-system(d7958e8c-754e-4550-bb8f-25cf241d9179)\"" pod="kube-system/kindnet-d7vgl" podUID="d7958e8c-754e-4550-bb8f-25cf241d9179"
	Apr 21 18:54:14 ha-113226 kubelet[1377]: I0421 18:54:14.880997    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:54:14 ha-113226 kubelet[1377]: E0421 18:54:14.881278    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:54:16 ha-113226 kubelet[1377]: I0421 18:54:16.880952    1377 scope.go:117] "RemoveContainer" containerID="75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b"
	Apr 21 18:54:28 ha-113226 kubelet[1377]: I0421 18:54:28.881493    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:54:28 ha-113226 kubelet[1377]: E0421 18:54:28.882317    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:54:32 ha-113226 kubelet[1377]: I0421 18:54:32.880730    1377 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-113226" podUID="a290fa40-f3a8-4995-87e6-00ae61ba51b5"
	Apr 21 18:54:32 ha-113226 kubelet[1377]: I0421 18:54:32.905220    1377 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-113226"
	Apr 21 18:54:42 ha-113226 kubelet[1377]: I0421 18:54:42.880132    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:54:42 ha-113226 kubelet[1377]: E0421 18:54:42.880757    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:54:54 ha-113226 kubelet[1377]: I0421 18:54:54.880777    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:54:54 ha-113226 kubelet[1377]: E0421 18:54:54.881525    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:54:55 ha-113226 kubelet[1377]: E0421 18:54:55.929003    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:54:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:54:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:54:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:54:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:55:05 ha-113226 kubelet[1377]: I0421 18:55:05.882018    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:05 ha-113226 kubelet[1377]: E0421 18:55:05.882528    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:55:20 ha-113226 kubelet[1377]: I0421 18:55:20.880477    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:20 ha-113226 kubelet[1377]: E0421 18:55:20.880780    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:55:35 ha-113226 kubelet[1377]: I0421 18:55:35.881759    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:35 ha-113226 kubelet[1377]: E0421 18:55:35.882336    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 18:55:35.085126   30135 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18702-3854/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-113226 -n ha-113226
helpers_test.go:261: (dbg) Run:  kubectl --context ha-113226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 stop -v=7 --alsologtostderr
E0421 18:56:09.208429   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 stop -v=7 --alsologtostderr: exit status 82 (2m0.493460024s)

                                                
                                                
-- stdout --
	* Stopping node "ha-113226-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:55:55.504524   30542 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:55:55.504668   30542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:55:55.504681   30542 out.go:304] Setting ErrFile to fd 2...
	I0421 18:55:55.504689   30542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:55:55.504854   30542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:55:55.505072   30542 out.go:298] Setting JSON to false
	I0421 18:55:55.505146   30542 mustload.go:65] Loading cluster: ha-113226
	I0421 18:55:55.505484   30542 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:55:55.505570   30542 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:55:55.505744   30542 mustload.go:65] Loading cluster: ha-113226
	I0421 18:55:55.505867   30542 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:55:55.505891   30542 stop.go:39] StopHost: ha-113226-m04
	I0421 18:55:55.506290   30542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:55:55.506327   30542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:55:55.521716   30542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I0421 18:55:55.522251   30542 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:55:55.522900   30542 main.go:141] libmachine: Using API Version  1
	I0421 18:55:55.522926   30542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:55:55.523317   30542 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:55:55.525670   30542 out.go:177] * Stopping node "ha-113226-m04"  ...
	I0421 18:55:55.526944   30542 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 18:55:55.526985   30542 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:55:55.527188   30542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 18:55:55.527217   30542 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:55:55.530125   30542 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:55:55.530534   30542 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:55:21 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:55:55.530562   30542 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:55:55.530686   30542 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:55:55.530871   30542 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:55:55.531098   30542 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:55:55.531245   30542 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	I0421 18:55:55.622410   30542 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 18:55:55.676748   30542 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 18:55:55.731285   30542 main.go:141] libmachine: Stopping "ha-113226-m04"...
	I0421 18:55:55.731321   30542 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:55:55.732860   30542 main.go:141] libmachine: (ha-113226-m04) Calling .Stop
	I0421 18:55:55.736137   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 0/120
	I0421 18:55:56.737536   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 1/120
	I0421 18:55:57.738795   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 2/120
	I0421 18:55:58.740067   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 3/120
	I0421 18:55:59.741214   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 4/120
	I0421 18:56:00.743050   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 5/120
	I0421 18:56:01.744324   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 6/120
	I0421 18:56:02.745606   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 7/120
	I0421 18:56:03.747058   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 8/120
	I0421 18:56:04.748317   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 9/120
	I0421 18:56:05.750324   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 10/120
	I0421 18:56:06.752701   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 11/120
	I0421 18:56:07.754193   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 12/120
	I0421 18:56:08.756590   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 13/120
	I0421 18:56:09.757905   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 14/120
	I0421 18:56:10.759489   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 15/120
	I0421 18:56:11.760837   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 16/120
	I0421 18:56:12.762191   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 17/120
	I0421 18:56:13.763587   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 18/120
	I0421 18:56:14.765887   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 19/120
	I0421 18:56:15.767680   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 20/120
	I0421 18:56:16.769452   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 21/120
	I0421 18:56:17.770998   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 22/120
	I0421 18:56:18.772351   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 23/120
	I0421 18:56:19.773697   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 24/120
	I0421 18:56:20.775874   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 25/120
	I0421 18:56:21.777226   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 26/120
	I0421 18:56:22.779361   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 27/120
	I0421 18:56:23.780794   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 28/120
	I0421 18:56:24.782382   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 29/120
	I0421 18:56:25.784358   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 30/120
	I0421 18:56:26.785718   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 31/120
	I0421 18:56:27.787200   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 32/120
	I0421 18:56:28.789011   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 33/120
	I0421 18:56:29.790553   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 34/120
	I0421 18:56:30.792641   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 35/120
	I0421 18:56:31.794037   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 36/120
	I0421 18:56:32.796219   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 37/120
	I0421 18:56:33.797802   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 38/120
	I0421 18:56:34.799132   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 39/120
	I0421 18:56:35.801362   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 40/120
	I0421 18:56:36.802799   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 41/120
	I0421 18:56:37.804640   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 42/120
	I0421 18:56:38.806229   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 43/120
	I0421 18:56:39.807661   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 44/120
	I0421 18:56:40.809786   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 45/120
	I0421 18:56:41.811357   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 46/120
	I0421 18:56:42.812857   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 47/120
	I0421 18:56:43.814111   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 48/120
	I0421 18:56:44.815392   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 49/120
	I0421 18:56:45.817070   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 50/120
	I0421 18:56:46.819561   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 51/120
	I0421 18:56:47.821043   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 52/120
	I0421 18:56:48.822679   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 53/120
	I0421 18:56:49.824013   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 54/120
	I0421 18:56:50.825811   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 55/120
	I0421 18:56:51.827084   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 56/120
	I0421 18:56:52.829349   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 57/120
	I0421 18:56:53.830658   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 58/120
	I0421 18:56:54.832135   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 59/120
	I0421 18:56:55.834113   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 60/120
	I0421 18:56:56.835405   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 61/120
	I0421 18:56:57.836872   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 62/120
	I0421 18:56:58.839311   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 63/120
	I0421 18:56:59.840941   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 64/120
	I0421 18:57:00.842886   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 65/120
	I0421 18:57:01.844345   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 66/120
	I0421 18:57:02.845795   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 67/120
	I0421 18:57:03.847575   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 68/120
	I0421 18:57:04.848926   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 69/120
	I0421 18:57:05.850375   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 70/120
	I0421 18:57:06.851767   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 71/120
	I0421 18:57:07.853426   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 72/120
	I0421 18:57:08.855622   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 73/120
	I0421 18:57:09.856893   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 74/120
	I0421 18:57:10.858803   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 75/120
	I0421 18:57:11.860623   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 76/120
	I0421 18:57:12.862172   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 77/120
	I0421 18:57:13.864706   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 78/120
	I0421 18:57:14.866111   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 79/120
	I0421 18:57:15.868343   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 80/120
	I0421 18:57:16.869780   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 81/120
	I0421 18:57:17.871030   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 82/120
	I0421 18:57:18.872498   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 83/120
	I0421 18:57:19.874014   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 84/120
	I0421 18:57:20.875856   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 85/120
	I0421 18:57:21.877966   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 86/120
	I0421 18:57:22.879369   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 87/120
	I0421 18:57:23.880611   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 88/120
	I0421 18:57:24.882141   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 89/120
	I0421 18:57:25.884226   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 90/120
	I0421 18:57:26.886206   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 91/120
	I0421 18:57:27.888601   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 92/120
	I0421 18:57:28.890152   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 93/120
	I0421 18:57:29.891613   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 94/120
	I0421 18:57:30.893687   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 95/120
	I0421 18:57:31.895639   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 96/120
	I0421 18:57:32.897289   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 97/120
	I0421 18:57:33.899156   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 98/120
	I0421 18:57:34.901005   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 99/120
	I0421 18:57:35.902499   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 100/120
	I0421 18:57:36.904745   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 101/120
	I0421 18:57:37.906803   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 102/120
	I0421 18:57:38.908475   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 103/120
	I0421 18:57:39.910555   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 104/120
	I0421 18:57:40.912173   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 105/120
	I0421 18:57:41.913535   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 106/120
	I0421 18:57:42.915400   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 107/120
	I0421 18:57:43.916710   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 108/120
	I0421 18:57:44.918106   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 109/120
	I0421 18:57:45.920130   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 110/120
	I0421 18:57:46.922090   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 111/120
	I0421 18:57:47.923397   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 112/120
	I0421 18:57:48.924884   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 113/120
	I0421 18:57:49.926319   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 114/120
	I0421 18:57:50.928109   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 115/120
	I0421 18:57:51.929793   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 116/120
	I0421 18:57:52.931495   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 117/120
	I0421 18:57:53.933236   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 118/120
	I0421 18:57:54.934436   30542 main.go:141] libmachine: (ha-113226-m04) Waiting for machine to stop 119/120
	I0421 18:57:55.935976   30542 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0421 18:57:55.936047   30542 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0421 18:57:55.938163   30542 out.go:177] 
	W0421 18:57:55.939726   30542 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0421 18:57:55.939752   30542 out.go:239] * 
	* 
	W0421 18:57:55.941982   30542 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 18:57:55.943410   30542 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-113226 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr: exit status 3 (19.064863481s)

                                                
                                                
-- stdout --
	ha-113226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-113226-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:57:56.005372   30971 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:57:56.005538   30971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:57:56.005551   30971 out.go:304] Setting ErrFile to fd 2...
	I0421 18:57:56.005558   30971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:57:56.005786   30971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:57:56.005988   30971 out.go:298] Setting JSON to false
	I0421 18:57:56.006016   30971 mustload.go:65] Loading cluster: ha-113226
	I0421 18:57:56.006082   30971 notify.go:220] Checking for updates...
	I0421 18:57:56.006597   30971 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:57:56.006618   30971 status.go:255] checking status of ha-113226 ...
	I0421 18:57:56.007078   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.007151   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.028813   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0421 18:57:56.029245   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.029806   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.029834   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.030148   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.030326   30971 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:57:56.031787   30971 status.go:330] ha-113226 host status = "Running" (err=<nil>)
	I0421 18:57:56.031812   30971 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:57:56.032071   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.032103   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.047186   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
	I0421 18:57:56.047690   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.048173   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.048195   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.048522   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.048703   30971 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:57:56.051453   30971 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:57:56.051936   30971 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:57:56.051978   30971 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:57:56.052070   30971 host.go:66] Checking if "ha-113226" exists ...
	I0421 18:57:56.052380   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.052425   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.068045   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I0421 18:57:56.068416   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.068907   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.068929   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.069230   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.069414   30971 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:57:56.069562   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:57:56.069598   30971 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:57:56.072397   30971 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:57:56.072887   30971 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:57:56.072906   30971 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:57:56.073064   30971 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:57:56.073236   30971 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:57:56.073381   30971 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:57:56.073530   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:57:56.165109   30971 ssh_runner.go:195] Run: systemctl --version
	I0421 18:57:56.173991   30971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:57:56.192454   30971 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:57:56.192480   30971 api_server.go:166] Checking apiserver status ...
	I0421 18:57:56.192513   30971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:57:56.212223   30971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5471/cgroup
	W0421 18:57:56.224386   30971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5471/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:57:56.224439   30971 ssh_runner.go:195] Run: ls
	I0421 18:57:56.229822   30971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:57:56.235996   30971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:57:56.236024   30971 status.go:422] ha-113226 apiserver status = Running (err=<nil>)
	I0421 18:57:56.236036   30971 status.go:257] ha-113226 status: &{Name:ha-113226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:57:56.236057   30971 status.go:255] checking status of ha-113226-m02 ...
	I0421 18:57:56.236463   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.236497   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.251109   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36419
	I0421 18:57:56.251600   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.252144   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.252183   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.252552   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.252758   30971 main.go:141] libmachine: (ha-113226-m02) Calling .GetState
	I0421 18:57:56.254245   30971 status.go:330] ha-113226-m02 host status = "Running" (err=<nil>)
	I0421 18:57:56.254262   30971 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:57:56.254647   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.254704   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.269971   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I0421 18:57:56.270460   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.271033   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.271065   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.271472   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.271695   30971 main.go:141] libmachine: (ha-113226-m02) Calling .GetIP
	I0421 18:57:56.274565   30971 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:57:56.275067   30971 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:53:09 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:57:56.275093   30971 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:57:56.275301   30971 host.go:66] Checking if "ha-113226-m02" exists ...
	I0421 18:57:56.275696   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.275750   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.290527   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37885
	I0421 18:57:56.290941   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.291418   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.291438   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.291724   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.291966   30971 main.go:141] libmachine: (ha-113226-m02) Calling .DriverName
	I0421 18:57:56.292161   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:57:56.292183   30971 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHHostname
	I0421 18:57:56.295112   30971 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:57:56.295514   30971 main.go:141] libmachine: (ha-113226-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2c:56", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:53:09 +0000 UTC Type:0 Mac:52:54:00:4f:2c:56 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-113226-m02 Clientid:01:52:54:00:4f:2c:56}
	I0421 18:57:56.295534   30971 main.go:141] libmachine: (ha-113226-m02) DBG | domain ha-113226-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:4f:2c:56 in network mk-ha-113226
	I0421 18:57:56.295697   30971 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHPort
	I0421 18:57:56.295856   30971 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHKeyPath
	I0421 18:57:56.295998   30971 main.go:141] libmachine: (ha-113226-m02) Calling .GetSSHUsername
	I0421 18:57:56.296150   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m02/id_rsa Username:docker}
	I0421 18:57:56.385404   30971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:57:56.406526   30971 kubeconfig.go:125] found "ha-113226" server: "https://192.168.39.254:8443"
	I0421 18:57:56.406550   30971 api_server.go:166] Checking apiserver status ...
	I0421 18:57:56.406580   30971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:57:56.423240   30971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0421 18:57:56.435317   30971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:57:56.435369   30971 ssh_runner.go:195] Run: ls
	I0421 18:57:56.441481   30971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0421 18:57:56.445897   30971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0421 18:57:56.445923   30971 status.go:422] ha-113226-m02 apiserver status = Running (err=<nil>)
	I0421 18:57:56.445933   30971 status.go:257] ha-113226-m02 status: &{Name:ha-113226-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 18:57:56.445951   30971 status.go:255] checking status of ha-113226-m04 ...
	I0421 18:57:56.446330   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.446365   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.461189   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0421 18:57:56.461692   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.462212   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.462237   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.462572   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.462761   30971 main.go:141] libmachine: (ha-113226-m04) Calling .GetState
	I0421 18:57:56.464181   30971 status.go:330] ha-113226-m04 host status = "Running" (err=<nil>)
	I0421 18:57:56.464195   30971 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:57:56.464468   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.464508   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.480233   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0421 18:57:56.480654   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.481185   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.481214   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.481547   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.481759   30971 main.go:141] libmachine: (ha-113226-m04) Calling .GetIP
	I0421 18:57:56.484395   30971 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:57:56.484803   30971 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:55:21 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:57:56.484831   30971 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:57:56.484957   30971 host.go:66] Checking if "ha-113226-m04" exists ...
	I0421 18:57:56.485230   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:57:56.485263   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:57:56.500575   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0421 18:57:56.501139   30971 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:57:56.501628   30971 main.go:141] libmachine: Using API Version  1
	I0421 18:57:56.501653   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:57:56.501970   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:57:56.502146   30971 main.go:141] libmachine: (ha-113226-m04) Calling .DriverName
	I0421 18:57:56.502319   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 18:57:56.502343   30971 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHHostname
	I0421 18:57:56.505256   30971 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:57:56.505698   30971 main.go:141] libmachine: (ha-113226-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:15:34", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:55:21 +0000 UTC Type:0 Mac:52:54:00:b1:15:34 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-113226-m04 Clientid:01:52:54:00:b1:15:34}
	I0421 18:57:56.505736   30971 main.go:141] libmachine: (ha-113226-m04) DBG | domain ha-113226-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b1:15:34 in network mk-ha-113226
	I0421 18:57:56.505894   30971 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHPort
	I0421 18:57:56.506090   30971 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHKeyPath
	I0421 18:57:56.506244   30971 main.go:141] libmachine: (ha-113226-m04) Calling .GetSSHUsername
	I0421 18:57:56.506415   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226-m04/id_rsa Username:docker}
	W0421 18:58:15.010273   30971 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.20:22: connect: no route to host
	W0421 18:58:15.010371   30971 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host
	E0421 18:58:15.010385   30971 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host
	I0421 18:58:15.010398   30971 status.go:257] ha-113226-m04 status: &{Name:ha-113226-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0421 18:58:15.010414   30971 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-113226 -n ha-113226
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-113226 logs -n 25: (1.895382919s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m04 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp testdata/cp-test.txt                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226:/home/docker/cp-test_ha-113226-m04_ha-113226.txt                       |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226 sudo cat                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226.txt                                 |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m02:/home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m02 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m03:/home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n                                                                 | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | ha-113226-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-113226 ssh -n ha-113226-m03 sudo cat                                          | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-113226 node stop m02 -v=7                                                     | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-113226 node start m02 -v=7                                                    | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-113226 -v=7                                                           | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-113226 -v=7                                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-113226 --wait=true -v=7                                                    | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:51 UTC | 21 Apr 24 18:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-113226                                                                | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:55 UTC |                     |
	| node    | ha-113226 node delete m03 -v=7                                                   | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:55 UTC | 21 Apr 24 18:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-113226 stop -v=7                                                              | ha-113226 | jenkins | v1.33.0 | 21 Apr 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:51:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:51:15.210483   28793 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:51:15.210608   28793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:51:15.210617   28793 out.go:304] Setting ErrFile to fd 2...
	I0421 18:51:15.210621   28793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:51:15.210825   28793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:51:15.211357   28793 out.go:298] Setting JSON to false
	I0421 18:51:15.212847   28793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1973,"bootTime":1713723502,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:51:15.213145   28793 start.go:139] virtualization: kvm guest
	I0421 18:51:15.215341   28793 out.go:177] * [ha-113226] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:51:15.216599   28793 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:51:15.216622   28793 notify.go:220] Checking for updates...
	I0421 18:51:15.217828   28793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:51:15.219184   28793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:51:15.220509   28793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:51:15.221764   28793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:51:15.223076   28793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:51:15.224752   28793 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:51:15.224852   28793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:51:15.225293   28793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:51:15.225335   28793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:51:15.240365   28793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I0421 18:51:15.240835   28793 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:51:15.241385   28793 main.go:141] libmachine: Using API Version  1
	I0421 18:51:15.241410   28793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:51:15.241726   28793 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:51:15.241910   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:51:15.280558   28793 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 18:51:15.281869   28793 start.go:297] selected driver: kvm2
	I0421 18:51:15.281884   28793 start.go:901] validating driver "kvm2" against &{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:51:15.282105   28793 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:51:15.282455   28793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:51:15.282530   28793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:51:15.299488   28793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:51:15.300171   28793 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:51:15.300230   28793 cni.go:84] Creating CNI manager for ""
	I0421 18:51:15.300241   28793 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0421 18:51:15.300300   28793 start.go:340] cluster config:
	{Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:51:15.300433   28793 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:51:15.302753   28793 out.go:177] * Starting "ha-113226" primary control-plane node in "ha-113226" cluster
	I0421 18:51:15.303855   28793 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:51:15.303895   28793 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:51:15.303917   28793 cache.go:56] Caching tarball of preloaded images
	I0421 18:51:15.304008   28793 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 18:51:15.304023   28793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 18:51:15.304225   28793 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/config.json ...
	I0421 18:51:15.304475   28793 start.go:360] acquireMachinesLock for ha-113226: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:51:15.304542   28793 start.go:364] duration metric: took 40.286µs to acquireMachinesLock for "ha-113226"
	I0421 18:51:15.304562   28793 start.go:96] Skipping create...Using existing machine configuration
	I0421 18:51:15.304572   28793 fix.go:54] fixHost starting: 
	I0421 18:51:15.304876   28793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:51:15.304918   28793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:51:15.319327   28793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34983
	I0421 18:51:15.319691   28793 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:51:15.320155   28793 main.go:141] libmachine: Using API Version  1
	I0421 18:51:15.320178   28793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:51:15.320494   28793 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:51:15.320692   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:51:15.320826   28793 main.go:141] libmachine: (ha-113226) Calling .GetState
	I0421 18:51:15.322375   28793 fix.go:112] recreateIfNeeded on ha-113226: state=Running err=<nil>
	W0421 18:51:15.322393   28793 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 18:51:15.324160   28793 out.go:177] * Updating the running kvm2 "ha-113226" VM ...
	I0421 18:51:15.325286   28793 machine.go:94] provisionDockerMachine start ...
	I0421 18:51:15.325303   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:51:15.325515   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.328112   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.328610   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.328641   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.328797   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.328954   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.329124   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.329241   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.329386   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.329619   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.329633   28793 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 18:51:15.443514   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226
	
	I0421 18:51:15.443537   28793 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:51:15.443777   28793 buildroot.go:166] provisioning hostname "ha-113226"
	I0421 18:51:15.443801   28793 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:51:15.444006   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.446790   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.447148   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.447176   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.447313   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.447495   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.447644   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.447785   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.447936   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.448115   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.448127   28793 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-113226 && echo "ha-113226" | sudo tee /etc/hostname
	I0421 18:51:15.572465   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-113226
	
	I0421 18:51:15.572496   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.575227   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.575633   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.575664   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.575830   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.576037   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.576163   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.576282   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.576427   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.576585   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.576601   28793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-113226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-113226/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-113226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:51:15.695540   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:51:15.695586   28793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 18:51:15.695620   28793 buildroot.go:174] setting up certificates
	I0421 18:51:15.695633   28793 provision.go:84] configureAuth start
	I0421 18:51:15.695641   28793 main.go:141] libmachine: (ha-113226) Calling .GetMachineName
	I0421 18:51:15.695876   28793 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:51:15.698449   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.698815   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.698840   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.698985   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.701045   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.701406   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.701436   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.701557   28793 provision.go:143] copyHostCerts
	I0421 18:51:15.701593   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:51:15.701625   28793 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 18:51:15.701650   28793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 18:51:15.701721   28793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 18:51:15.701789   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:51:15.701808   28793 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 18:51:15.701815   28793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 18:51:15.701837   28793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 18:51:15.701881   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:51:15.701925   28793 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 18:51:15.701934   28793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 18:51:15.701958   28793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 18:51:15.702001   28793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.ha-113226 san=[127.0.0.1 192.168.39.60 ha-113226 localhost minikube]
	I0421 18:51:15.805246   28793 provision.go:177] copyRemoteCerts
	I0421 18:51:15.805299   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:51:15.805320   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.807862   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.808218   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.808248   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.808411   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.808591   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.808782   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.808912   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:51:15.894966   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 18:51:15.895032   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:51:15.930596   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 18:51:15.930665   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0421 18:51:15.960170   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 18:51:15.960234   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 18:51:15.987760   28793 provision.go:87] duration metric: took 292.11615ms to configureAuth
	I0421 18:51:15.987783   28793 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:51:15.987985   28793 config.go:182] Loaded profile config "ha-113226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:51:15.988094   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:51:15.990566   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.990921   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:51:15.990947   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:51:15.991077   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:51:15.991267   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.991402   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:51:15.991564   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:51:15.991774   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:51:15.991923   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:51:15.991937   28793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 18:52:46.906791   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 18:52:46.906813   28793 machine.go:97] duration metric: took 1m31.581515123s to provisionDockerMachine
	I0421 18:52:46.906824   28793 start.go:293] postStartSetup for "ha-113226" (driver="kvm2")
	I0421 18:52:46.906834   28793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:52:46.906846   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:46.907222   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:52:46.907248   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:46.910424   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:46.910912   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:46.910940   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:46.911084   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:46.911273   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:46.911412   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:46.911561   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:52:46.993882   28793 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:52:46.998743   28793 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:52:46.998772   28793 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 18:52:46.998846   28793 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 18:52:46.998959   28793 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 18:52:46.998972   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 18:52:46.999051   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 18:52:47.009247   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:52:47.036322   28793 start.go:296] duration metric: took 129.488493ms for postStartSetup
	I0421 18:52:47.036355   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.036618   28793 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0421 18:52:47.036645   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.039253   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.039673   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.039694   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.039820   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.040004   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.040204   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.040352   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	W0421 18:52:47.120679   28793 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0421 18:52:47.120703   28793 fix.go:56] duration metric: took 1m31.816132887s for fixHost
	I0421 18:52:47.120722   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.123463   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.123808   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.123839   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.124057   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.124251   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.124422   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.124561   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.124734   28793 main.go:141] libmachine: Using SSH client type: native
	I0421 18:52:47.124940   28793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0421 18:52:47.124951   28793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:52:47.230883   28793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713725567.180811629
	
	I0421 18:52:47.230909   28793 fix.go:216] guest clock: 1713725567.180811629
	I0421 18:52:47.230921   28793 fix.go:229] Guest: 2024-04-21 18:52:47.180811629 +0000 UTC Remote: 2024-04-21 18:52:47.120709476 +0000 UTC m=+91.954605676 (delta=60.102153ms)
	I0421 18:52:47.230976   28793 fix.go:200] guest clock delta is within tolerance: 60.102153ms
	I0421 18:52:47.230990   28793 start.go:83] releasing machines lock for "ha-113226", held for 1m31.926434241s
	I0421 18:52:47.231041   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.231324   28793 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:52:47.233757   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.234153   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.234176   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.234357   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.234875   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.235054   28793 main.go:141] libmachine: (ha-113226) Calling .DriverName
	I0421 18:52:47.235162   28793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:52:47.235202   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.235299   28793 ssh_runner.go:195] Run: cat /version.json
	I0421 18:52:47.235324   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHHostname
	I0421 18:52:47.237774   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.237838   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.238168   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.238195   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.238230   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:47.238248   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:47.238394   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.238508   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHPort
	I0421 18:52:47.238573   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.238639   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHKeyPath
	I0421 18:52:47.238728   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.238788   28793 main.go:141] libmachine: (ha-113226) Calling .GetSSHUsername
	I0421 18:52:47.238866   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:52:47.238969   28793 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/ha-113226/id_rsa Username:docker}
	I0421 18:52:47.316550   28793 ssh_runner.go:195] Run: systemctl --version
	I0421 18:52:47.341717   28793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 18:52:47.503572   28793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:52:47.512142   28793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:52:47.512194   28793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:52:47.522115   28793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0421 18:52:47.522135   28793 start.go:494] detecting cgroup driver to use...
	I0421 18:52:47.522187   28793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:52:47.539204   28793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:52:47.554220   28793 docker.go:217] disabling cri-docker service (if available) ...
	I0421 18:52:47.554262   28793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 18:52:47.568599   28793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 18:52:47.582768   28793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 18:52:47.739606   28793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 18:52:47.903652   28793 docker.go:233] disabling docker service ...
	I0421 18:52:47.903728   28793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 18:52:47.922624   28793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 18:52:47.938306   28793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 18:52:48.099823   28793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 18:52:48.259998   28793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 18:52:48.275500   28793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:52:48.297314   28793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 18:52:48.297378   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.309220   28793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 18:52:48.309279   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.320698   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.331654   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.342638   28793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:52:48.353786   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.364507   28793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.376690   28793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 18:52:48.387296   28793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:52:48.396941   28793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:52:48.406612   28793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:52:48.561185   28793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 18:52:56.261169   28793 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.699943072s)
	I0421 18:52:56.261207   28793 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 18:52:56.261269   28793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 18:52:56.268918   28793 start.go:562] Will wait 60s for crictl version
	I0421 18:52:56.268996   28793 ssh_runner.go:195] Run: which crictl
	I0421 18:52:56.273431   28793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:52:56.319738   28793 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 18:52:56.319819   28793 ssh_runner.go:195] Run: crio --version
	I0421 18:52:56.356193   28793 ssh_runner.go:195] Run: crio --version
	I0421 18:52:56.394157   28793 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 18:52:56.395489   28793 main.go:141] libmachine: (ha-113226) Calling .GetIP
	I0421 18:52:56.398231   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:56.398629   28793 main.go:141] libmachine: (ha-113226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6a:b5", ip: ""} in network mk-ha-113226: {Iface:virbr1 ExpiryTime:2024-04-21 19:40:27 +0000 UTC Type:0 Mac:52:54:00:3d:6a:b5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-113226 Clientid:01:52:54:00:3d:6a:b5}
	I0421 18:52:56.398657   28793 main.go:141] libmachine: (ha-113226) DBG | domain ha-113226 has defined IP address 192.168.39.60 and MAC address 52:54:00:3d:6a:b5 in network mk-ha-113226
	I0421 18:52:56.398858   28793 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 18:52:56.403932   28793 kubeadm.go:877] updating cluster {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:52:56.404048   28793 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:52:56.404086   28793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:52:56.450751   28793 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:52:56.450774   28793 crio.go:433] Images already preloaded, skipping extraction
	I0421 18:52:56.450818   28793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 18:52:56.487736   28793 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 18:52:56.487764   28793 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:52:56.487785   28793 kubeadm.go:928] updating node { 192.168.39.60 8443 v1.30.0 crio true true} ...
	I0421 18:52:56.487892   28793 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-113226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:52:56.487954   28793 ssh_runner.go:195] Run: crio config
	I0421 18:52:56.540756   28793 cni.go:84] Creating CNI manager for ""
	I0421 18:52:56.540777   28793 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0421 18:52:56.540789   28793 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:52:56.540807   28793 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-113226 NodeName:ha-113226 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:52:56.540944   28793 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-113226"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 18:52:56.540963   28793 kube-vip.go:111] generating kube-vip config ...
	I0421 18:52:56.541000   28793 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 18:52:56.553778   28793 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 18:52:56.553879   28793 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0421 18:52:56.553941   28793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:52:56.564524   28793 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:52:56.564604   28793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0421 18:52:56.574919   28793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0421 18:52:56.593821   28793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:52:56.614542   28793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0421 18:52:56.633255   28793 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 18:52:56.651926   28793 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0421 18:52:56.656443   28793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:52:56.825021   28793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:52:56.843174   28793 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226 for IP: 192.168.39.60
	I0421 18:52:56.843206   28793 certs.go:194] generating shared ca certs ...
	I0421 18:52:56.843226   28793 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:52:56.843420   28793 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 18:52:56.843469   28793 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 18:52:56.843488   28793 certs.go:256] generating profile certs ...
	I0421 18:52:56.843602   28793 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/client.key
	I0421 18:52:56.843647   28793 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4
	I0421 18:52:56.843665   28793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.60 192.168.39.233 192.168.39.221 192.168.39.254]
	I0421 18:52:56.961003   28793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4 ...
	I0421 18:52:56.961032   28793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4: {Name:mk07572f65db96649e5689620ad024dc81367460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:52:56.961193   28793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4 ...
	I0421 18:52:56.961206   28793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4: {Name:mkb9bc9cf2fa9da84e8673ad1c03c994b9959f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:52:56.961273   28793 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt.7a2b46f4 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt
	I0421 18:52:56.961421   28793 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key.7a2b46f4 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key
	I0421 18:52:56.961542   28793 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key
	I0421 18:52:56.961556   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:52:56.961568   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:52:56.961577   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:52:56.961614   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:52:56.961626   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:52:56.961639   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:52:56.961650   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:52:56.961661   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
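The apiserver cert generated above carries one IP SAN per control-plane node plus the service VIP (10.96.0.1), localhost, and the kube-vip address (192.168.39.254). A minimal sketch of issuing such a cert with Go's crypto/x509 (an illustration only, not minikube's crypto.go; it assumes the shared CA key is an RSA key in PKCS#1 PEM form):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	return b
}

func main() {
	// Load the shared CA that minikube validated above.
	caBlock, _ := pem.Decode(mustRead("/var/lib/minikube/certs/ca.crt"))
	keyBlock, _ := pem.Decode(mustRead("/var/lib/minikube/certs/ca.key"))
	if caBlock == nil || keyBlock == nil {
		log.Fatal("no PEM data in CA cert or key")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	// New key pair for the apiserver serving cert.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs listed in the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.60"), net.ParseIP("192.168.39.233"),
			net.ParseIP("192.168.39.221"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}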
	I0421 18:52:56.961704   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 18:52:56.961728   28793 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 18:52:56.961741   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 18:52:56.961764   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 18:52:56.961786   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 18:52:56.961805   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 18:52:56.961858   28793 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 18:52:56.961885   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 18:52:56.961898   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:56.961910   28793 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 18:52:56.962462   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:52:56.992900   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:52:57.021462   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:52:57.048047   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:52:57.075589   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0421 18:52:57.104632   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:52:57.132120   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:52:57.160882   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/ha-113226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:52:57.188563   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 18:52:57.214912   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:52:57.240783   28793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 18:52:57.266176   28793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:52:57.284357   28793 ssh_runner.go:195] Run: openssl version
	I0421 18:52:57.290914   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 18:52:57.303135   28793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 18:52:57.308578   28793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 18:52:57.308635   28793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 18:52:57.315380   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 18:52:57.327336   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 18:52:57.340616   28793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 18:52:57.345902   28793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 18:52:57.345967   28793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 18:52:57.352700   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:52:57.365418   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:52:57.379447   28793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:57.384944   28793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:57.385019   28793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:52:57.391618   28793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
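The ln -fs steps above implement OpenSSL's hashed-directory lookup: each CA copied into /usr/share/ca-certificates is linked as /etc/ssl/certs/<subject-hash>.0 (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients find it. A short Go sketch of the same two steps, shelling out to openssl just as the log does (illustrative only):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	ca := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl x509 -hash -noout -in <ca>  -> subject hash, e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// ln -fs <ca> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f: replace any existing link
	if err := os.Symlink(ca, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println(link, "->", ca)
}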
	I0421 18:52:57.403002   28793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:52:57.408179   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 18:52:57.414516   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 18:52:57.420907   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 18:52:57.427073   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 18:52:57.433452   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 18:52:57.439470   28793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
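The -checkend 86400 runs above only verify that each certificate remains valid for at least the next 24 hours. The same check in pure Go, as a sketch (paths taken from the log; this is not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring "openssl x509 -noout -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}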
	I0421 18:52:57.445348   28793 kubeadm.go:391] StartCluster: {Name:ha-113226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-113226 Namespace:def
ault APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:52:57.445466   28793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 18:52:57.445509   28793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 18:52:57.491949   28793 cri.go:89] found id: "cec5aecc2baa9aa0253ec3572fe6694d40ae706a199900f1fbd191e643bccf68"
	I0421 18:52:57.491971   28793 cri.go:89] found id: "e712bcee62861e6f7147d7647ae7bb28143301bc8962f8044e30b41e479fff83"
	I0421 18:52:57.491977   28793 cri.go:89] found id: "a8a10be9bb5c911a3c668dda6454032df65d90f1cee81ee86c6a4dae3beff46b"
	I0421 18:52:57.491983   28793 cri.go:89] found id: "e9859a052b0cdfd328208070617bb9885e431ee60338ee4aacc99886c1076168"
	I0421 18:52:57.491987   28793 cri.go:89] found id: "a34ff5cf35a2507a4d0034fd919d274ef31c3fefd6a4c29a738fde35359aa598"
	I0421 18:52:57.491991   28793 cri.go:89] found id: "3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f"
	I0421 18:52:57.491998   28793 cri.go:89] found id: "0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3"
	I0421 18:52:57.492001   28793 cri.go:89] found id: "7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3"
	I0421 18:52:57.492005   28793 cri.go:89] found id: "a95e4d8a09dd58b9a4d7ca7ffc121b48512788dfbf54c0ce583f916ca471d87f"
	I0421 18:52:57.492016   28793 cri.go:89] found id: "6ebd07febd8dc7db401cd62248ae11f4dc752644d8a30e59dd82b656335cf639"
	I0421 18:52:57.492020   28793 cri.go:89] found id: "51aef143989131476630badb20072da7008f83d60e08a5f148f26bd1708d0f01"
	I0421 18:52:57.492031   28793 cri.go:89] found id: "9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c"
	I0421 18:52:57.492040   28793 cri.go:89] found id: "e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab"
	I0421 18:52:57.492048   28793 cri.go:89] found id: ""
	I0421 18:52:57.492095   28793 ssh_runner.go:195] Run: sudo runc list -f json
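The "found id" entries above come from listing every kube-system container over the CRI socket with the label filter shown in the Run line. The equivalent crictl invocation wrapped in a short Go sketch (illustrative only, not minikube's cri.go):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}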
	
	
	==> CRI-O <==
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.652471053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725895652443519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50ae21d9-7359-4682-ba23-6d6940fb0049 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.653066161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=797af335-46b8-4977-9617-1e6c92e83eda name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.653126256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=797af335-46b8-4977-9617-1e6c92e83eda name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.653800104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61e39576b630afc0211e05794511293263fbb1ade63d44ef7cd0cce037ff697,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713725747894942910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubern
etes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernete
s.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=797af335-46b8-4977-9617-1e6c92e83eda name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.704534460Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcbff172-425c-4593-a4bb-113b317eab62 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.704610018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcbff172-425c-4593-a4bb-113b317eab62 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.707560672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec1c153b-72f4-4d74-a1df-e7ee31f213f3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.708631070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725895708603086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec1c153b-72f4-4d74-a1df-e7ee31f213f3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.711804511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86949bb5-c6dd-4b07-8863-f70bdcff5a49 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.711859762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86949bb5-c6dd-4b07-8863-f70bdcff5a49 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.712375899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61e39576b630afc0211e05794511293263fbb1ade63d44ef7cd0cce037ff697,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713725747894942910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubern
etes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernete
s.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86949bb5-c6dd-4b07-8863-f70bdcff5a49 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.762436286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=264f4eb5-a609-4e05-98ac-5fafcb92aae5 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.762513846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=264f4eb5-a609-4e05-98ac-5fafcb92aae5 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.763743479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d29b40e-64d9-4285-90f9-434068657dff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.764251685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725895764139879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d29b40e-64d9-4285-90f9-434068657dff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.764818394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b962fd76-34fd-435d-9b5c-14c80953db29 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.764875999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b962fd76-34fd-435d-9b5c-14c80953db29 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.765525839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61e39576b630afc0211e05794511293263fbb1ade63d44ef7cd0cce037ff697,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713725747894942910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubern
etes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernete
s.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b962fd76-34fd-435d-9b5c-14c80953db29 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.815673325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dd6402b-c4a4-4d12-94f8-dccd9d0b2209 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.815750015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dd6402b-c4a4-4d12-94f8-dccd9d0b2209 name=/runtime.v1.RuntimeService/Version
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.816944547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=242c0c66-d754-437c-bc0a-02a2837d3eae name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.817429143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713725895817404262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=242c0c66-d754-437c-bc0a-02a2837d3eae name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.819258425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbd77314-386d-436f-819a-341abfe035fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.819353634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbd77314-386d-436f-819a-341abfe035fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 18:58:15 ha-113226 crio[4059]: time="2024-04-21 18:58:15.819744564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61e39576b630afc0211e05794511293263fbb1ade63d44ef7cd0cce037ff697,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713725747894942910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubernetes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713725656900579090,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713725627897834735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713725625899834616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ccec9ce1c6a94cf333dd4dbb34a0e72a288d9bf4c390ae75f8f32b62f70e0,PodSandboxId:0c1137d4f8c0a3e69c1bb1bbaac28b20e162ce11cf1f5dcaa38b461c182aa535,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713725617301052132,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernetes.container.hash: 457ab870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a902a7308f45c93f8eed0e1308cdbb5a0f70a1858989221e4a6974f9734388,PodSandboxId:95aebbed003bf1a71e4c87b30d0856edfc909f1665a0c58cf2edc25319e65c16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713725597877647768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e7a110736216d6563f961109716ab1,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38,PodSandboxId:2ea2894bd3019a81a70f2dc6c0e9da229ed4f3dc08438b19b486589316d291c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713725584365378511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861,PodSandboxId:34be5bbca0e6ffc39238635e7269177defd368400da2219e3f7b621d3bff13b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725584101409995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73,PodSandboxId:f92c491180117607046d4cc0aac2e4aa1859aeb1c9fb6476d8ec135b1cce0072,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713725584033495258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c27d6bd33bd9cc85fad97fe1045c356,},Annotations:map[string]string{io.kubernetes.container.hash: af7c5d31,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b,PodSandboxId:b9697b59042c3aa2158c1b6801c6d84588db31db0ee6eebc395c12f985453d13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713725583740283602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d7vgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7958e8c-754e-4550-bb8f-25cf241d9179,},Annotations:map[string]string{io.kubernetes.container.hash: f511fc3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5,PodSandboxId:a8446a0185d7a93e0c7766ec06d9b0b22f4096aa96ccbd664f60874c7d4eeb9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713725583841853559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c,PodSandboxId:37b7204b479e0e01eb8e8335a0b7d15d1c58a15b9b009bc6254431243f97f3ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713725583775222976,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280,PodSandboxId:68d32b69085e4e6cf3245887ff9b78e7f3e670ff6e3b83913f037d84ad0b80a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713725583692589746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bdc0bc9fa11b76392eb37c3eb09966b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff,PodSandboxId:e7a8bf497b3208ed4cd8dd1e49d5b9f48c8999da9a2e061936f9e6cd01f912a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713725578165620509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"c
ontainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906,PodSandboxId:3bfb041480c05a23c41e6c475b7fd3d82ed5c61b4627513a179aaa39fb02c410,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713725578082674301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa37bc69-20f7-416c-9cb7-56430aed3215,},Annotations:map[string]string{io.kubern
etes.container.hash: 9f0d51e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f640c1c70ad368b13d43041d2af3baa8c3ace303c6a9385c85ac1cf317acc0,PodSandboxId:faa43bf489bc59d353ec0d2947eea1ef54a043396a36d274b4dc7a5a35f6b001,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713725076899039403,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-vvhg8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb008f69-72f1-4ab3-a77a-791783889db9,},Annotations:map[string]string{io.kubernete
s.container.hash: 457ab870,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f,PodSandboxId:65ac1d3e43166a5010caa9dd45a9e79e55a89180115505218826af3cf4d5766e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724869256742277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zhskp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4f93e9-bf7f-467f-9fa2-b40ef42ae42f,},Annotations:map[string]string{io.kubernetes.container.hash: d71ce901,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3,PodSandboxId:2607d8484c47e93d43511de91918e1b1d5187e8c1ce02e30ec4d186f07f497e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713724868859636524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-n8sbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d836c4-74bf-4509-8ca9-8d0dea360fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 3938f330,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3,PodSandboxId:68e3a1db8a00beadb553926e6b3c80890017a7419b414a2c0a3fabef3bf2f28b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713724866702422072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h75dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c365aaf4-b083-4247-acd0-cc753abc9f98,},Annotations:map[string]string{io.kubernetes.container.hash: a63838e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab,PodSandboxId:adb821c8b93f8155e201afb188a1e579bc60ea6f9afa6f210caa80813f28f4cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713724846631510377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 278d9f60f07e2ab4168c325829fe7af9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c,PodSandboxId:6167071453e7169a395945371bda4a64d511a29f3de32dc22d43ff954fcb3a32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713724846640123166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-113226,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6896cec9347eeed5c4aeae0852d3a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7f9a7d76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbd77314-386d-436f-819a-341abfe035fc name=/runtime.v1.RuntimeService/ListContainers
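	
	The repeated Version, ImageFsInfo, and ListContainers entries above are routine CRI polling of the cri-o RuntimeService/ImageService. A minimal sketch for pulling the same data directly from the node during triage; this assumes the ha-113226 profile named in the log, that cri-o runs as the crio systemd unit inside the minikube VM (the crio[4059] prefix suggests it does), and that crictl is available there:
	
	minikube -p ha-113226 ssh "sudo journalctl -u crio --no-pager --since '10 minutes ago'"   # raw cri-o debug log
	minikube -p ha-113226 ssh "sudo crictl version"                                           # RuntimeService/Version
	minikube -p ha-113226 ssh "sudo crictl imagefsinfo"                                       # ImageService/ImageFsInfo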
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b61e39576b630       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   3bfb041480c05       storage-provisioner
	8c59cf8229cb9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               4                   b9697b59042c3       kindnet-d7vgl
	ac3545ccf2386       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   f92c491180117       kube-apiserver-ha-113226
	6a975d433ed67       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   68d32b69085e4       kube-controller-manager-ha-113226
	946ccec9ce1c6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   0c1137d4f8c0a       busybox-fc5497c4f-vvhg8
	42a902a7308f4       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   95aebbed003bf       kube-vip-ha-113226
	c3e186bcff628       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   2ea2894bd3019       kube-proxy-h75dp
	0c169617913d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   34be5bbca0e6f       coredns-7db6d8ff4d-n8sbt
	c1f212d6384dd       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   f92c491180117       kube-apiserver-ha-113226
	141b5338a5c5c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   a8446a0185d7a       kube-scheduler-ha-113226
	f77a8f0a05fd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   37b7204b479e0       etcd-ha-113226
	75879d4cf5765       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   b9697b59042c3       kindnet-d7vgl
	2a96397b9fbb4       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   68d32b69085e4       kube-controller-manager-ha-113226
	16da795694ae0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e7a8bf497b320       coredns-7db6d8ff4d-zhskp
	31173f263a910       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   3bfb041480c05       storage-provisioner
	70f640c1c70ad       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   faa43bf489bc5       busybox-fc5497c4f-vvhg8
	3e93f6b05d337       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   65ac1d3e43166       coredns-7db6d8ff4d-zhskp
	0b5d0ab414db7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   2607d8484c47e       coredns-7db6d8ff4d-n8sbt
	7048fade386a1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      17 minutes ago      Exited              kube-proxy                0                   68e3a1db8a00b       kube-proxy-h75dp
	9224faad5a972       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   6167071453e71       etcd-ha-113226
	e5498303bb3f9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      17 minutes ago      Exited              kube-scheduler            0                   adb821c8b93f8       kube-scheduler-ha-113226
	
	
	==> coredns [0b5d0ab414db73822d2fd33857ba5f07e2037728ad905cd85001975619300cd3] <==
	[INFO] 10.244.2.2:36822 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002003347s
	[INFO] 10.244.2.2:41452 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195319s
	[INFO] 10.244.2.2:60508 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074721s
	[INFO] 10.244.0.4:51454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104908s
	[INFO] 10.244.0.4:57376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109603s
	[INFO] 10.244.0.4:40827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078308s
	[INFO] 10.244.0.4:47256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153662s
	[INFO] 10.244.0.4:37424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014403s
	[INFO] 10.244.0.4:57234 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144257s
	[INFO] 10.244.1.2:51901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259177s
	[INFO] 10.244.1.2:44450 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123202s
	[INFO] 10.244.2.2:53556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169239s
	[INFO] 10.244.2.2:42828 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117966s
	[INFO] 10.244.2.2:51827 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137514s
	[INFO] 10.244.0.4:56918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047175s
	[INFO] 10.244.1.2:45608 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118838s
	[INFO] 10.244.1.2:50713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000284967s
	[INFO] 10.244.2.2:58426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000313356s
	[INFO] 10.244.2.2:39340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130525s
	[INFO] 10.244.0.4:58687 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094588s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [0c169617913d13a79863d02be89a0d84b3b57e718376758191f935203b8cb861] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1613568619]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:53:09.665) (total time: 10001ms):
	Trace[1613568619]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:53:19.667)
	Trace[1613568619]: [10.001757051s] [10.001757051s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47800->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47800->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47818->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47818->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [16da795694ae0a3990a25679b958d673c75b696d3ab6d8c2461dd48a7ad79eff] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1932980695]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:53:11.070) (total time: 10001ms):
	Trace[1932980695]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:53:21.072)
	Trace[1932980695]: [10.001891188s] [10.001891188s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1323223527]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:53:13.122) (total time: 10001ms):
	Trace[1323223527]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:53:23.123)
	Trace[1323223527]: [10.001302614s] [10.001302614s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35716->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35716->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [3e93f6b05d33724d8fb0ec5d819ad380903f2c6da8686bcf22ec03311ef2622f] <==
	[INFO] 10.244.1.2:41498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146514s
	[INFO] 10.244.2.2:54180 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001245595s
	[INFO] 10.244.2.2:56702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118055s
	[INFO] 10.244.2.2:52049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103643s
	[INFO] 10.244.2.2:39892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013318s
	[INFO] 10.244.0.4:50393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617766s
	[INFO] 10.244.0.4:58125 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001163449s
	[INFO] 10.244.1.2:55583 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000370228s
	[INFO] 10.244.1.2:57237 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092539s
	[INFO] 10.244.2.2:42488 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129888s
	[INFO] 10.244.0.4:48460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104891s
	[INFO] 10.244.0.4:35562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112767s
	[INFO] 10.244.0.4:37396 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009448s
	[INFO] 10.244.1.2:40110 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268116s
	[INFO] 10.244.1.2:40165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166492s
	[INFO] 10.244.2.2:45365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014902s
	[INFO] 10.244.2.2:48282 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093124s
	[INFO] 10.244.0.4:43339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000357932s
	[INFO] 10.244.0.4:39537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086381s
	[INFO] 10.244.0.4:33649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093318s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-113226
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_40_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:40:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:40:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:53:50 +0000   Sun, 21 Apr 2024 18:41:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-113226
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f770328f068141e091b6c3dbf4a76488
	  System UUID:                f770328f-0681-41e0-91b6-c3dbf4a76488
	  Boot ID:                    bbf1e5be-35e8-4986-b694-bc173cac60e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vvhg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-n8sbt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-zhskp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-113226                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-d7vgl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-113226             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-113226    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-h75dp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-113226             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-113226                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m30s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-113226 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-113226 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-113226 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-113226 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Warning  ContainerGCFailed        5m21s (x2 over 6m21s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m24s                  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-113226 event: Registered Node ha-113226 in Controller
	
	
	Name:               ha-113226-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_42_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:42:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:58:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:57:02 +0000   Sun, 21 Apr 2024 18:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:57:02 +0000   Sun, 21 Apr 2024 18:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:57:02 +0000   Sun, 21 Apr 2024 18:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:57:02 +0000   Sun, 21 Apr 2024 18:57:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    ha-113226-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e96ca06000049ab994a1d4c31482f88
	  System UUID:                8e96ca06-0000-49ab-994a-1d4c31482f88
	  Boot ID:                    6038fb53-8a68-423d-9e50-f8692a3f2cf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-djlm5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-113226-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-4hx6j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-113226-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-113226-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nsv74                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-113226-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-113226-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-113226-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-113226-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-113226-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-113226-m02 status is now: NodeNotReady
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-113226-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-113226-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-113226-m02 event: Registered Node ha-113226-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-113226-m02 status is now: NodeNotReady
	
	
	Name:               ha-113226-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-113226-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-113226
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T18_45_13_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:45:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-113226-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:55:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:56:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:56:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:56:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Apr 2024 18:55:27 +0000   Sun, 21 Apr 2024 18:56:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-113226-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d55ce55d9e44738a42ed29cc9f1198
	  System UUID:                c1d55ce5-5d9e-4473-8a42-ed29cc9f1198
	  Boot ID:                    ebc30470-0a8b-487f-94f3-953131eede89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-shdvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-jkd2l              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-6s6v7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-113226-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-113226-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-113226-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-113226-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m24s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   NodeNotReady             3m44s                  node-controller  Node ha-113226-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-113226-m04 event: Registered Node ha-113226-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-113226-m04 has been rebooted, boot id: ebc30470-0a8b-487f-94f3-953131eede89
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-113226-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-113226-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m49s                  kubelet          Node ha-113226-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m49s                  kubelet          Node ha-113226-m04 status is now: NodeReady
	  Normal   NodeNotReady             109s                   node-controller  Node ha-113226-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.054859] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.200473] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.119933] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.314231] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.898172] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.066212] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.334925] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +1.112693] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.070346] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.082112] kauditd_printk_skb: 40 callbacks suppressed
	[Apr21 18:41] kauditd_printk_skb: 21 callbacks suppressed
	[Apr21 18:43] kauditd_printk_skb: 74 callbacks suppressed
	[Apr21 18:49] kauditd_printk_skb: 1 callbacks suppressed
	[Apr21 18:51] kauditd_printk_skb: 1 callbacks suppressed
	[Apr21 18:52] systemd-fstab-generator[3973]: Ignoring "noauto" option for root device
	[  +0.160575] systemd-fstab-generator[3986]: Ignoring "noauto" option for root device
	[  +0.200169] systemd-fstab-generator[4000]: Ignoring "noauto" option for root device
	[  +0.155198] systemd-fstab-generator[4012]: Ignoring "noauto" option for root device
	[  +0.304935] systemd-fstab-generator[4040]: Ignoring "noauto" option for root device
	[  +8.255385] systemd-fstab-generator[4144]: Ignoring "noauto" option for root device
	[  +0.098018] kauditd_printk_skb: 100 callbacks suppressed
	[Apr21 18:53] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.554895] kauditd_printk_skb: 78 callbacks suppressed
	[ +28.116673] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.455822] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9224faad5a97231a887d1494d807a99007139de4295ddd273632ef31a3cc6f9c] <==
	":"2024-04-21T18:51:15.336251Z","time spent":"809.592598ms","remote":"127.0.0.1:40468","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 "}
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-21T18:51:16.145902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:51:15.343656Z","time spent":"802.241085ms","remote":"127.0.0.1:40312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/21 18:51:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-21T18:51:16.256755Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"798b9b0ee1342456","rtt":"11.934223ms","error":"dial tcp 192.168.39.233:2380: i/o timeout"}
	{"level":"info","ts":"2024-04-21T18:51:16.26168Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1a622f206f99396a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-21T18:51:16.261905Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.261953Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262005Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262115Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262152Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262296Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.262341Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"798b9b0ee1342456"}
	{"level":"info","ts":"2024-04-21T18:51:16.26235Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262363Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262378Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262465Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262492Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262518Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.262555Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:51:16.265498Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-04-21T18:51:16.265721Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-04-21T18:51:16.26576Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-113226","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"]}
	
	
	==> etcd [f77a8f0a05fd3b187fc9adaad71c1f4c104d424718835f739bc1fff69844aa4c] <==
	{"level":"info","ts":"2024-04-21T18:54:39.170571Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:54:39.170867Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1a622f206f99396a","to":"3968f0b022895f5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-21T18:54:39.170935Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:54:39.229799Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.221:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-21T18:54:39.786481Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:39.786619Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3968f0b022895f5","rtt":"0s","error":"dial tcp 192.168.39.221:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-21T18:54:53.405368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.1555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-21T18:54:53.40559Z","caller":"traceutil/trace.go:171","msg":"trace[264659370] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:2498; }","duration":"125.47486ms","start":"2024-04-21T18:54:53.280077Z","end":"2024-04-21T18:54:53.405552Z","steps":["trace[264659370] 'count revisions from in-memory index tree'  (duration: 123.498298ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:55:41.572929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250 8758264388562199638)"}
	{"level":"info","ts":"2024-04-21T18:55:41.575657Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","removed-remote-peer-id":"3968f0b022895f5","removed-remote-peer-urls":["https://192.168.39.221:2380"]}
	{"level":"info","ts":"2024-04-21T18:55:41.576011Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:55:41.581378Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:55:41.585289Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:55:41.586549Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:55:41.586613Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:55:41.586824Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:55:41.587118Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5","error":"context canceled"}
	{"level":"warn","ts":"2024-04-21T18:55:41.587263Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3968f0b022895f5","error":"failed to read 3968f0b022895f5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-21T18:55:41.587309Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:55:41.592337Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5","error":"context canceled"}
	{"level":"info","ts":"2024-04-21T18:55:41.592417Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1a622f206f99396a","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:55:41.592439Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3968f0b022895f5"}
	{"level":"info","ts":"2024-04-21T18:55:41.592488Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"1a622f206f99396a","removed-remote-peer-id":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:55:41.603015Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"1a622f206f99396a","remote-peer-id-stream-handler":"1a622f206f99396a","remote-peer-id-from":"3968f0b022895f5"}
	{"level":"warn","ts":"2024-04-21T18:55:41.612129Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.221:45852","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:58:16 up 18 min,  0 users,  load average: 0.34, 0.44, 0.39
	Linux ha-113226 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [75879d4cf57654dbbba4bf0da84a33d25e242c941aa26583e166170a9900b91b] <==
	I0421 18:53:04.516487       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0421 18:53:14.828806       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0421 18:53:16.675774       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0421 18:53:19.747937       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0421 18:53:26.385067       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.207:46190->10.96.0.1:443: read: connection reset by peer
	I0421 18:53:32.035860       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [8c59cf8229cb91efce141b8fbd4b8d02d74f117cca4af0a3f0c580332753873b] <==
	I0421 18:57:28.602116       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:57:38.619458       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:57:38.619545       1 main.go:227] handling current node
	I0421 18:57:38.619579       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:57:38.619598       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:57:38.619707       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:57:38.619726       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:57:48.627303       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:57:48.627368       1 main.go:227] handling current node
	I0421 18:57:48.627386       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:57:48.627396       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:57:48.627674       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:57:48.627722       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:57:58.643467       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:57:58.643512       1 main.go:227] handling current node
	I0421 18:57:58.643523       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:57:58.643529       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:57:58.643679       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:57:58.643764       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	I0421 18:58:08.650372       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0421 18:58:08.650578       1 main.go:227] handling current node
	I0421 18:58:08.650609       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0421 18:58:08.650628       1 main.go:250] Node ha-113226-m02 has CIDR [10.244.1.0/24] 
	I0421 18:58:08.650746       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0421 18:58:08.650766       1 main.go:250] Node ha-113226-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ac3545ccf2386edba85345ae867650eec607043168143686f6669cd79ea3b27d] <==
	I0421 18:53:49.845528       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0421 18:53:49.934346       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 18:53:49.934418       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 18:53:49.935402       1 shared_informer.go:320] Caches are synced for configmaps
	I0421 18:53:49.935502       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0421 18:53:49.936028       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 18:53:49.936576       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 18:53:49.942679       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0421 18:53:49.943004       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0421 18:53:49.944918       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0421 18:53:49.944973       1 aggregator.go:165] initial CRD sync complete...
	I0421 18:53:49.944988       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 18:53:49.944993       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 18:53:49.944998       1 cache.go:39] Caches are synced for autoregister controller
	W0421 18:53:49.946683       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.221 192.168.39.233]
	I0421 18:53:49.946992       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 18:53:49.947037       1 policy_source.go:224] refreshing policies
	I0421 18:53:49.948234       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 18:53:49.956690       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0421 18:53:49.960771       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0421 18:53:49.991459       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 18:53:50.846981       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0421 18:53:51.380893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.221 192.168.39.233 192.168.39.60]
	W0421 18:54:01.383063       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.233 192.168.39.60]
	W0421 18:56:01.389077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.233 192.168.39.60]
	
	
	==> kube-apiserver [c1f212d6384dd448b66fc7fe94f2e7fc770668501046a6790d297d3a66b1ef73] <==
	I0421 18:53:04.736016       1 options.go:221] external host was not specified, using 192.168.39.60
	I0421 18:53:04.738855       1 server.go:148] Version: v1.30.0
	I0421 18:53:04.738993       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:53:05.362097       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0421 18:53:05.373570       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 18:53:05.378116       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0421 18:53:05.378159       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0421 18:53:05.378409       1 instance.go:299] Using reconciler: lease
	W0421 18:53:25.362136       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0421 18:53:25.362136       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0421 18:53:25.379099       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2a96397b9fbb4485740df18f543cfb6f47fa808494f021a25e421b9c7c293280] <==
	I0421 18:53:05.747129       1 serving.go:380] Generated self-signed cert in-memory
	I0421 18:53:06.388486       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0421 18:53:06.388539       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:53:06.390899       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0421 18:53:06.391051       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0421 18:53:06.391669       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 18:53:06.391760       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0421 18:53:26.395695       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.60:8443/healthz\": dial tcp 192.168.39.60:8443: connect: connection refused"
	
	
	==> kube-controller-manager [6a975d433ed67425e877f0d71e2fd1b4a51030d86953f1180ef669707badf3c8] <==
	E0421 18:56:28.007634       1 daemon_controller.go:324] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"72fd541a-4036-4ecf-943c-3cf885013395", ResourceVersion:"2621", Generation:1, CreationTimestamp:time.Date(2024, time.April, 21, 18, 40, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0021e3a00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January,
1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerV
olumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0012cadc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a95b60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFS
VolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSourc
e:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a95b78), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPe
rsistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.30.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021e3a40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"
kube-proxy", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001c20240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralC
ontainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0000ec228), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002478200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.Preemptio
nPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00232b0d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0000ec290)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0421 18:56:28.053860       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.203803ms"
	I0421 18:56:28.054332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.126µs"
	E0421 18:56:42.739292       1 gc_controller.go:153] "Failed to get node" err="node \"ha-113226-m03\" not found" logger="pod-garbage-collector-controller" node="ha-113226-m03"
	E0421 18:56:42.739346       1 gc_controller.go:153] "Failed to get node" err="node \"ha-113226-m03\" not found" logger="pod-garbage-collector-controller" node="ha-113226-m03"
	E0421 18:56:42.739354       1 gc_controller.go:153] "Failed to get node" err="node \"ha-113226-m03\" not found" logger="pod-garbage-collector-controller" node="ha-113226-m03"
	E0421 18:56:42.739359       1 gc_controller.go:153] "Failed to get node" err="node \"ha-113226-m03\" not found" logger="pod-garbage-collector-controller" node="ha-113226-m03"
	E0421 18:56:42.739364       1 gc_controller.go:153] "Failed to get node" err="node \"ha-113226-m03\" not found" logger="pod-garbage-collector-controller" node="ha-113226-m03"
	I0421 18:56:42.752916       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-113226-m03"
	I0421 18:56:42.798283       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-113226-m03"
	I0421 18:56:42.798458       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-113226-m03"
	I0421 18:56:42.840472       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-113226-m03"
	I0421 18:56:42.840630       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-113226-m03"
	I0421 18:56:42.876073       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-113226-m03"
	I0421 18:56:42.876523       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-113226-m03"
	I0421 18:56:42.915969       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-113226-m03"
	I0421 18:56:42.916028       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rhmbs"
	I0421 18:56:42.944276       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rhmbs"
	I0421 18:56:42.944510       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-shlwr"
	I0421 18:56:42.974744       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-shlwr"
	I0421 18:56:42.974798       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-113226-m03"
	I0421 18:56:43.006684       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-113226-m03"
	I0421 18:57:02.551351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.596488ms"
	I0421 18:57:02.552389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="274.854µs"
	
	
	==> kube-proxy [7048fade386a19e2874002bea6be590e9d7164f0b3322a9798a12d8def4d90d3] <==
	W0421 18:50:07.300572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:07.300616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:07.300639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:13.443887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:13.445246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:13.445849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:13.445922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:13.445416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:13.446239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:22.662409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:22.662544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:22.662749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:22.662810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:25.733558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:25.733625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:38.021083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:38.021389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:41.093707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:41.094509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:50:47.237655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:50:47.237772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:51:14.883908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:51:14.884046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1915": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:51:14.884148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:51:14.884243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1967": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [c3e186bcff6286a8a7651fa19ecbe34b092ef66440ee40768f3311f6abc76f38] <==
	E0421 18:53:26.979851       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-113226\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0421 18:53:45.412525       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-113226\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0421 18:53:45.412667       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0421 18:53:45.456125       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:53:45.456329       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:53:45.456361       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:53:45.459434       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:53:45.459880       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:53:45.460268       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:53:45.461898       1 config.go:192] "Starting service config controller"
	I0421 18:53:45.462151       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:53:45.462390       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:53:45.462423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:53:45.463158       1 config.go:319] "Starting node config controller"
	I0421 18:53:45.463289       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0421 18:53:48.486103       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0421 18:53:48.486646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:53:48.487097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:53:48.487800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:53:48.488436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0421 18:53:48.488717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0421 18:53:48.488922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-113226&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0421 18:53:50.064155       1 shared_informer.go:320] Caches are synced for node config
	I0421 18:53:50.064239       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:53:50.064288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [141b5338a5c5cf53042c915197b14b6815bedb1872e85583950ac6d2aee4f8e5] <==
	W0421 18:53:44.167053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.167136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.215021       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.215096       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.358691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.358889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:44.677385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:44.677530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:45.304752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:45.304823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:45.844318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:45.844356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:46.740958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:46.741028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:47.158760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:47.158904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:47.500823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:47.500925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	W0421 18:53:47.919561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0421 18:53:47.919595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	I0421 18:53:59.398524       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0421 18:55:38.242569       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-shdvf\": pod busybox-fc5497c4f-shdvf is already assigned to node \"ha-113226-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-shdvf" node="ha-113226-m04"
	E0421 18:55:38.243476       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 48750b1c-827f-46f6-a787-96027c68c5fd(default/busybox-fc5497c4f-shdvf) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-shdvf"
	E0421 18:55:38.243635       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-shdvf\": pod busybox-fc5497c4f-shdvf is already assigned to node \"ha-113226-m04\"" pod="default/busybox-fc5497c4f-shdvf"
	I0421 18:55:38.246916       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-shdvf" node="ha-113226-m04"
	
	
	==> kube-scheduler [e5498303bb3f96feab6681531029d1842c7dc42557a4d46757c2294b8886d7ab] <==
	W0421 18:51:13.168618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 18:51:13.168738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 18:51:13.426700       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 18:51:13.426799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 18:51:13.624431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 18:51:13.624552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 18:51:14.004476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 18:51:14.004602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 18:51:14.070072       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 18:51:14.070244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 18:51:14.131630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 18:51:14.131761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 18:51:14.215390       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 18:51:14.215542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 18:51:14.290145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 18:51:14.290281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 18:51:14.327916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 18:51:14.327982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 18:51:14.387365       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 18:51:14.387450       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:51:15.202362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 18:51:15.202466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0421 18:51:16.093374       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0421 18:51:16.093525       1 run.go:74] "command failed" err="finished without leader elect"
	I0421 18:51:16.093603       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	
	==> kubelet <==
	Apr 21 18:54:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:54:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:55:05 ha-113226 kubelet[1377]: I0421 18:55:05.882018    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:05 ha-113226 kubelet[1377]: E0421 18:55:05.882528    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:55:20 ha-113226 kubelet[1377]: I0421 18:55:20.880477    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:20 ha-113226 kubelet[1377]: E0421 18:55:20.880780    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:55:35 ha-113226 kubelet[1377]: I0421 18:55:35.881759    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:35 ha-113226 kubelet[1377]: E0421 18:55:35.882336    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa37bc69-20f7-416c-9cb7-56430aed3215)\"" pod="kube-system/storage-provisioner" podUID="aa37bc69-20f7-416c-9cb7-56430aed3215"
	Apr 21 18:55:47 ha-113226 kubelet[1377]: I0421 18:55:47.880724    1377 scope.go:117] "RemoveContainer" containerID="31173f263a910033239b924165c92bb85b705c75531302539f158b783173d906"
	Apr 21 18:55:48 ha-113226 kubelet[1377]: I0421 18:55:48.642044    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-113226" podStartSLOduration=76.642007616 podStartE2EDuration="1m16.642007616s" podCreationTimestamp="2024-04-21 18:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 18:54:35.904122723 +0000 UTC m=+820.169727907" watchObservedRunningTime="2024-04-21 18:55:48.642007616 +0000 UTC m=+892.907612800"
	Apr 21 18:55:55 ha-113226 kubelet[1377]: E0421 18:55:55.932636    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:55:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:55:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:55:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:55:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:56:55 ha-113226 kubelet[1377]: E0421 18:56:55.927651    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:56:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:56:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:56:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:56:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:57:55 ha-113226 kubelet[1377]: E0421 18:57:55.927706    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:57:55 ha-113226 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:57:55 ha-113226 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:57:55 ha-113226 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:57:55 ha-113226 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0421 18:58:15.373861   31131 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18702-3854/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-113226 -n ha-113226
helpers_test.go:261: (dbg) Run:  kubectl --context ha-113226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.17s)

x
+
TestMultiNode/serial/RestartKeepsNodes (308.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860427
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-860427
E0421 19:14:06.204824   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-860427: exit status 82 (2m2.709203154s)

-- stdout --
	* Stopping node "multinode-860427-m03"  ...
	* Stopping node "multinode-860427-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-860427" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860427 --wait=true -v=8 --alsologtostderr
E0421 19:16:09.209979   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 19:17:09.250397   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860427 --wait=true -v=8 --alsologtostderr: (3m3.016096339s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860427
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-860427 -n multinode-860427
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-860427 logs -n 25: (1.641246978s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2829491611/001/cp-test_multinode-860427-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427:/home/docker/cp-test_multinode-860427-m02_multinode-860427.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427 sudo cat                                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m02_multinode-860427.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03:/home/docker/cp-test_multinode-860427-m02_multinode-860427-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427-m03 sudo cat                                   | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m02_multinode-860427-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp testdata/cp-test.txt                                                | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2829491611/001/cp-test_multinode-860427-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427:/home/docker/cp-test_multinode-860427-m03_multinode-860427.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427 sudo cat                                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m03_multinode-860427.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02:/home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427-m02 sudo cat                                   | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-860427 node stop m03                                                          | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	| node    | multinode-860427 node start                                                             | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	| stop    | -p multinode-860427                                                                     | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	| start   | -p multinode-860427                                                                     | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:15 UTC | 21 Apr 24 19:18 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:15:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:15:50.735473   40508 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:15:50.735592   40508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:15:50.735600   40508 out.go:304] Setting ErrFile to fd 2...
	I0421 19:15:50.735605   40508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:15:50.735814   40508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:15:50.736327   40508 out.go:298] Setting JSON to false
	I0421 19:15:50.737240   40508 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3449,"bootTime":1713723502,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:15:50.737294   40508 start.go:139] virtualization: kvm guest
	I0421 19:15:50.740312   40508 out.go:177] * [multinode-860427] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:15:50.741826   40508 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:15:50.743352   40508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:15:50.741795   40508 notify.go:220] Checking for updates...
	I0421 19:15:50.744734   40508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:15:50.746182   40508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:15:50.747383   40508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:15:50.748598   40508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:15:50.750218   40508 config.go:182] Loaded profile config "multinode-860427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:15:50.750296   40508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:15:50.750674   40508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:15:50.750712   40508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:15:50.765693   40508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0421 19:15:50.766109   40508 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:15:50.766690   40508 main.go:141] libmachine: Using API Version  1
	I0421 19:15:50.766715   40508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:15:50.767003   40508 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:15:50.767208   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:15:50.802725   40508 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:15:50.804139   40508 start.go:297] selected driver: kvm2
	I0421 19:15:50.804156   40508 start.go:901] validating driver "kvm2" against &{Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNam
e:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:15:50.804289   40508 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:15:50.804585   40508 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:15:50.804679   40508 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:15:50.821726   40508 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:15:50.822469   40508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:15:50.822537   40508 cni.go:84] Creating CNI manager for ""
	I0421 19:15:50.822550   40508 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 19:15:50.822614   40508 start.go:340] cluster config:
	{Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kube
virt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:15:50.822750   40508 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:15:50.824753   40508 out.go:177] * Starting "multinode-860427" primary control-plane node in "multinode-860427" cluster
	I0421 19:15:50.826111   40508 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:15:50.826157   40508 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:15:50.826168   40508 cache.go:56] Caching tarball of preloaded images
	I0421 19:15:50.826243   40508 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:15:50.826257   40508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:15:50.826385   40508 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/config.json ...
	I0421 19:15:50.826575   40508 start.go:360] acquireMachinesLock for multinode-860427: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:15:50.826621   40508 start.go:364] duration metric: took 26.502µs to acquireMachinesLock for "multinode-860427"
	I0421 19:15:50.826654   40508 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:15:50.826662   40508 fix.go:54] fixHost starting: 
	I0421 19:15:50.826931   40508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:15:50.826968   40508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:15:50.841411   40508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0421 19:15:50.841853   40508 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:15:50.842342   40508 main.go:141] libmachine: Using API Version  1
	I0421 19:15:50.842360   40508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:15:50.842708   40508 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:15:50.842917   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:15:50.843078   40508 main.go:141] libmachine: (multinode-860427) Calling .GetState
	I0421 19:15:50.844802   40508 fix.go:112] recreateIfNeeded on multinode-860427: state=Running err=<nil>
	W0421 19:15:50.844820   40508 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:15:50.847716   40508 out.go:177] * Updating the running kvm2 "multinode-860427" VM ...
	I0421 19:15:50.849140   40508 machine.go:94] provisionDockerMachine start ...
	I0421 19:15:50.849158   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:15:50.849408   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:50.852110   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.852577   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:50.852610   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.852752   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:50.852950   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.853114   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.853255   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:50.853417   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:50.853591   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:50.853600   40508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:15:50.972348   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860427
	
	I0421 19:15:50.972377   40508 main.go:141] libmachine: (multinode-860427) Calling .GetMachineName
	I0421 19:15:50.972598   40508 buildroot.go:166] provisioning hostname "multinode-860427"
	I0421 19:15:50.972620   40508 main.go:141] libmachine: (multinode-860427) Calling .GetMachineName
	I0421 19:15:50.972821   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:50.975591   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.975942   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:50.975973   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.976119   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:50.976333   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.976503   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.976701   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:50.976882   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:50.977090   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:50.977113   40508 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-860427 && echo "multinode-860427" | sudo tee /etc/hostname
	I0421 19:15:51.113207   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860427
	
	I0421 19:15:51.113245   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.116061   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.116480   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.116511   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.116715   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:51.116928   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.117084   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.117187   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:51.117354   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:51.117533   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:51.117556   40508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-860427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-860427/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-860427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:15:51.231288   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:15:51.231319   40508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:15:51.231353   40508 buildroot.go:174] setting up certificates
	I0421 19:15:51.231360   40508 provision.go:84] configureAuth start
	I0421 19:15:51.231377   40508 main.go:141] libmachine: (multinode-860427) Calling .GetMachineName
	I0421 19:15:51.231692   40508 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:15:51.234207   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.234570   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.234599   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.234672   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.236672   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.237046   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.237080   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.237200   40508 provision.go:143] copyHostCerts
	I0421 19:15:51.237230   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:15:51.237265   40508 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:15:51.237280   40508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:15:51.237352   40508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:15:51.237423   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:15:51.237440   40508 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:15:51.237450   40508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:15:51.237482   40508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:15:51.237533   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:15:51.237555   40508 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:15:51.237562   40508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:15:51.237599   40508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:15:51.237660   40508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.multinode-860427 san=[127.0.0.1 192.168.39.100 localhost minikube multinode-860427]
	I0421 19:15:51.285371   40508 provision.go:177] copyRemoteCerts
	I0421 19:15:51.285438   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:15:51.285467   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.288066   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.288392   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.288418   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.288637   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:51.288847   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.289003   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:51.289115   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:15:51.379244   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 19:15:51.379312   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:15:51.410219   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 19:15:51.410287   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0421 19:15:51.439796   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 19:15:51.439876   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:15:51.468435   40508 provision.go:87] duration metric: took 237.058228ms to configureAuth
	I0421 19:15:51.468469   40508 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:15:51.468744   40508 config.go:182] Loaded profile config "multinode-860427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:15:51.468835   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.471459   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.471840   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.471858   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.472136   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:51.472342   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.472479   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.472557   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:51.472706   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:51.472916   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:51.472945   40508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:17:22.394548   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:17:22.394577   40508 machine.go:97] duration metric: took 1m31.545425039s to provisionDockerMachine
	I0421 19:17:22.394593   40508 start.go:293] postStartSetup for "multinode-860427" (driver="kvm2")
	I0421 19:17:22.394610   40508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:17:22.394652   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.394989   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:17:22.395021   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.397869   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.398438   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.398468   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.398658   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.398841   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.398986   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.399092   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:17:22.492752   40508 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:17:22.497972   40508 command_runner.go:130] > NAME=Buildroot
	I0421 19:17:22.497993   40508 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 19:17:22.497999   40508 command_runner.go:130] > ID=buildroot
	I0421 19:17:22.498006   40508 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 19:17:22.498012   40508 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 19:17:22.498383   40508 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:17:22.498408   40508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:17:22.498485   40508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:17:22.498561   40508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:17:22.498572   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 19:17:22.498651   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:17:22.510667   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:17:22.539183   40508 start.go:296] duration metric: took 144.574371ms for postStartSetup
	I0421 19:17:22.539229   40508 fix.go:56] duration metric: took 1m31.712567169s for fixHost
	I0421 19:17:22.539250   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.541978   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.542361   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.542395   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.542625   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.542860   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.543052   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.543190   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.543374   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:17:22.543594   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:17:22.543608   40508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:17:22.659572   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713727042.641962267
	
	I0421 19:17:22.659594   40508 fix.go:216] guest clock: 1713727042.641962267
	I0421 19:17:22.659608   40508 fix.go:229] Guest: 2024-04-21 19:17:22.641962267 +0000 UTC Remote: 2024-04-21 19:17:22.539233659 +0000 UTC m=+91.849775258 (delta=102.728608ms)
	I0421 19:17:22.659655   40508 fix.go:200] guest clock delta is within tolerance: 102.728608ms
	I0421 19:17:22.659666   40508 start.go:83] releasing machines lock for "multinode-860427", held for 1m31.833032579s
	I0421 19:17:22.659692   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.659962   40508 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:17:22.662726   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.663162   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.663193   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.663400   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.663901   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.664077   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.664166   40508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:17:22.664209   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.664309   40508 ssh_runner.go:195] Run: cat /version.json
	I0421 19:17:22.664335   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.666564   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.666864   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.666898   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.667034   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.667064   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.667286   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.667399   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.667428   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.667436   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.667540   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:17:22.667662   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.667808   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.667920   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.668044   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:17:22.779728   40508 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 19:17:22.780610   40508 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0421 19:17:22.780748   40508 ssh_runner.go:195] Run: systemctl --version
	I0421 19:17:22.787687   40508 command_runner.go:130] > systemd 252 (252)
	I0421 19:17:22.787764   40508 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0421 19:17:22.787838   40508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:17:22.953485   40508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 19:17:22.960600   40508 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0421 19:17:22.960937   40508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:17:22.961006   40508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:17:22.971756   40508 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0421 19:17:22.971780   40508 start.go:494] detecting cgroup driver to use...
	I0421 19:17:22.971844   40508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:17:22.991058   40508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:17:23.007285   40508 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:17:23.007343   40508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:17:23.022828   40508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:17:23.037651   40508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:17:23.194461   40508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:17:23.341107   40508 docker.go:233] disabling docker service ...
	I0421 19:17:23.341184   40508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:17:23.358370   40508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:17:23.373137   40508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:17:23.527071   40508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:17:23.669898   40508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:17:23.685705   40508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:17:23.709125   40508 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
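For reference, the printf | tee pipeline above leaves /etc/crictl.yaml with a single entry pointing crictl at the CRI-O socket. A minimal way to confirm it (the file path and contents are taken from the log lines above; reaching the node via minikube ssh is an assumption):

	    # inside the multinode-860427 VM, e.g. via "minikube -p multinode-860427 ssh"
	    $ cat /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock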
	I0421 19:17:23.709171   40508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 19:17:23.709219   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.721570   40508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:17:23.721646   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.733308   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.745568   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.757324   40508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:17:23.768959   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.780287   40508 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.793212   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
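Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf in roughly the following state. This is a sketch showing only the keys touched by this run; the TOML section headers are the usual CRI-O ones and are an assumption, since the sed expressions match the keys wherever they appear in the file:

	    # /etc/crio/crio.conf.d/02-crio.conf (sketch of the result of the edits above)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]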
	I0421 19:17:23.804395   40508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:17:23.814155   40508 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 19:17:23.814291   40508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
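The two kernel settings touched here, net.bridge.bridge-nf-call-iptables and net.ipv4.ip_forward, let bridged pod traffic pass through iptables and be routed between interfaces. A quick check that both are enabled (a sketch, run inside the VM) should report:

	    $ sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	    net.bridge.bridge-nf-call-iptables = 1
	    net.ipv4.ip_forward = 1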
	I0421 19:17:23.824111   40508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:17:23.967925   40508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:17:24.215968   40508 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:17:24.216039   40508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:17:24.222284   40508 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0421 19:17:24.222303   40508 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 19:17:24.222310   40508 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0421 19:17:24.222316   40508 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 19:17:24.222322   40508 command_runner.go:130] > Access: 2024-04-21 19:17:24.167203513 +0000
	I0421 19:17:24.222329   40508 command_runner.go:130] > Modify: 2024-04-21 19:17:24.089200083 +0000
	I0421 19:17:24.222335   40508 command_runner.go:130] > Change: 2024-04-21 19:17:24.089200083 +0000
	I0421 19:17:24.222339   40508 command_runner.go:130] >  Birth: -
	I0421 19:17:24.222702   40508 start.go:562] Will wait 60s for crictl version
	I0421 19:17:24.222764   40508 ssh_runner.go:195] Run: which crictl
	I0421 19:17:24.227359   40508 command_runner.go:130] > /usr/bin/crictl
	I0421 19:17:24.227708   40508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:17:24.270722   40508 command_runner.go:130] > Version:  0.1.0
	I0421 19:17:24.270755   40508 command_runner.go:130] > RuntimeName:  cri-o
	I0421 19:17:24.270760   40508 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0421 19:17:24.270766   40508 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 19:17:24.270978   40508 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:17:24.271067   40508 ssh_runner.go:195] Run: crio --version
	I0421 19:17:24.301568   40508 command_runner.go:130] > crio version 1.29.1
	I0421 19:17:24.301599   40508 command_runner.go:130] > Version:        1.29.1
	I0421 19:17:24.301613   40508 command_runner.go:130] > GitCommit:      unknown
	I0421 19:17:24.301620   40508 command_runner.go:130] > GitCommitDate:  unknown
	I0421 19:17:24.301625   40508 command_runner.go:130] > GitTreeState:   clean
	I0421 19:17:24.301634   40508 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0421 19:17:24.301639   40508 command_runner.go:130] > GoVersion:      go1.21.6
	I0421 19:17:24.301645   40508 command_runner.go:130] > Compiler:       gc
	I0421 19:17:24.301651   40508 command_runner.go:130] > Platform:       linux/amd64
	I0421 19:17:24.301658   40508 command_runner.go:130] > Linkmode:       dynamic
	I0421 19:17:24.301665   40508 command_runner.go:130] > BuildTags:      
	I0421 19:17:24.301672   40508 command_runner.go:130] >   containers_image_ostree_stub
	I0421 19:17:24.301683   40508 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0421 19:17:24.301693   40508 command_runner.go:130] >   btrfs_noversion
	I0421 19:17:24.301701   40508 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0421 19:17:24.301710   40508 command_runner.go:130] >   libdm_no_deferred_remove
	I0421 19:17:24.301717   40508 command_runner.go:130] >   seccomp
	I0421 19:17:24.301725   40508 command_runner.go:130] > LDFlags:          unknown
	I0421 19:17:24.301735   40508 command_runner.go:130] > SeccompEnabled:   true
	I0421 19:17:24.301742   40508 command_runner.go:130] > AppArmorEnabled:  false
	I0421 19:17:24.303166   40508 ssh_runner.go:195] Run: crio --version
	I0421 19:17:24.336877   40508 command_runner.go:130] > crio version 1.29.1
	I0421 19:17:24.336901   40508 command_runner.go:130] > Version:        1.29.1
	I0421 19:17:24.336909   40508 command_runner.go:130] > GitCommit:      unknown
	I0421 19:17:24.336916   40508 command_runner.go:130] > GitCommitDate:  unknown
	I0421 19:17:24.336922   40508 command_runner.go:130] > GitTreeState:   clean
	I0421 19:17:24.336931   40508 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0421 19:17:24.336937   40508 command_runner.go:130] > GoVersion:      go1.21.6
	I0421 19:17:24.336943   40508 command_runner.go:130] > Compiler:       gc
	I0421 19:17:24.336951   40508 command_runner.go:130] > Platform:       linux/amd64
	I0421 19:17:24.336958   40508 command_runner.go:130] > Linkmode:       dynamic
	I0421 19:17:24.336964   40508 command_runner.go:130] > BuildTags:      
	I0421 19:17:24.336973   40508 command_runner.go:130] >   containers_image_ostree_stub
	I0421 19:17:24.336980   40508 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0421 19:17:24.336987   40508 command_runner.go:130] >   btrfs_noversion
	I0421 19:17:24.336994   40508 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0421 19:17:24.337008   40508 command_runner.go:130] >   libdm_no_deferred_remove
	I0421 19:17:24.337014   40508 command_runner.go:130] >   seccomp
	I0421 19:17:24.337018   40508 command_runner.go:130] > LDFlags:          unknown
	I0421 19:17:24.337022   40508 command_runner.go:130] > SeccompEnabled:   true
	I0421 19:17:24.337028   40508 command_runner.go:130] > AppArmorEnabled:  false
	I0421 19:17:24.339040   40508 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 19:17:24.340427   40508 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:17:24.342869   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:24.343165   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:24.343186   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:24.343378   40508 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 19:17:24.348591   40508 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0421 19:17:24.348688   40508 kubeadm.go:877] updating cluster {Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-860
427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false ist
io:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:17:24.348824   40508 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:17:24.348865   40508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:17:24.400609   40508 command_runner.go:130] > {
	I0421 19:17:24.400630   40508 command_runner.go:130] >   "images": [
	I0421 19:17:24.400635   40508 command_runner.go:130] >     {
	I0421 19:17:24.400643   40508 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0421 19:17:24.400648   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400657   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0421 19:17:24.400663   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400671   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400687   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0421 19:17:24.400701   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0421 19:17:24.400710   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400720   40508 command_runner.go:130] >       "size": "65291810",
	I0421 19:17:24.400726   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.400735   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.400755   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.400770   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.400776   40508 command_runner.go:130] >     },
	I0421 19:17:24.400782   40508 command_runner.go:130] >     {
	I0421 19:17:24.400790   40508 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0421 19:17:24.400798   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400803   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0421 19:17:24.400808   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400813   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400823   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0421 19:17:24.400838   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0421 19:17:24.400844   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400849   40508 command_runner.go:130] >       "size": "1363676",
	I0421 19:17:24.400855   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.400865   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.400871   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.400875   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.400879   40508 command_runner.go:130] >     },
	I0421 19:17:24.400882   40508 command_runner.go:130] >     {
	I0421 19:17:24.400888   40508 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0421 19:17:24.400894   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400899   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0421 19:17:24.400903   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400907   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400915   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0421 19:17:24.400924   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0421 19:17:24.400930   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400936   40508 command_runner.go:130] >       "size": "31470524",
	I0421 19:17:24.400942   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.400946   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.400951   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.400955   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.400962   40508 command_runner.go:130] >     },
	I0421 19:17:24.400965   40508 command_runner.go:130] >     {
	I0421 19:17:24.400970   40508 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0421 19:17:24.400977   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400981   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0421 19:17:24.400985   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400989   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400998   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0421 19:17:24.401014   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0421 19:17:24.401020   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401025   40508 command_runner.go:130] >       "size": "61245718",
	I0421 19:17:24.401030   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.401035   40508 command_runner.go:130] >       "username": "nonroot",
	I0421 19:17:24.401041   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401046   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401052   40508 command_runner.go:130] >     },
	I0421 19:17:24.401056   40508 command_runner.go:130] >     {
	I0421 19:17:24.401062   40508 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0421 19:17:24.401066   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401071   40508 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0421 19:17:24.401077   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401081   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401088   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0421 19:17:24.401097   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0421 19:17:24.401101   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401105   40508 command_runner.go:130] >       "size": "150779692",
	I0421 19:17:24.401110   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401115   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401119   40508 command_runner.go:130] >       },
	I0421 19:17:24.401126   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401130   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401134   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401138   40508 command_runner.go:130] >     },
	I0421 19:17:24.401143   40508 command_runner.go:130] >     {
	I0421 19:17:24.401149   40508 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0421 19:17:24.401155   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401160   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0421 19:17:24.401164   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401168   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401175   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0421 19:17:24.401184   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0421 19:17:24.401188   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401192   40508 command_runner.go:130] >       "size": "117609952",
	I0421 19:17:24.401196   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401200   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401203   40508 command_runner.go:130] >       },
	I0421 19:17:24.401207   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401211   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401214   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401219   40508 command_runner.go:130] >     },
	I0421 19:17:24.401224   40508 command_runner.go:130] >     {
	I0421 19:17:24.401232   40508 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0421 19:17:24.401236   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401248   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0421 19:17:24.401254   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401257   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401265   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0421 19:17:24.401275   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0421 19:17:24.401278   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401282   40508 command_runner.go:130] >       "size": "112170310",
	I0421 19:17:24.401285   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401289   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401292   40508 command_runner.go:130] >       },
	I0421 19:17:24.401296   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401301   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401304   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401310   40508 command_runner.go:130] >     },
	I0421 19:17:24.401313   40508 command_runner.go:130] >     {
	I0421 19:17:24.401320   40508 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0421 19:17:24.401325   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401330   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0421 19:17:24.401333   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401338   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401351   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0421 19:17:24.401361   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0421 19:17:24.401364   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401368   40508 command_runner.go:130] >       "size": "85932953",
	I0421 19:17:24.401375   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.401381   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401385   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401389   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401393   40508 command_runner.go:130] >     },
	I0421 19:17:24.401396   40508 command_runner.go:130] >     {
	I0421 19:17:24.401402   40508 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0421 19:17:24.401405   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401410   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0421 19:17:24.401414   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401418   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401425   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0421 19:17:24.401431   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0421 19:17:24.401434   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401438   40508 command_runner.go:130] >       "size": "63026502",
	I0421 19:17:24.401441   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401445   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401448   40508 command_runner.go:130] >       },
	I0421 19:17:24.401451   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401455   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401459   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401462   40508 command_runner.go:130] >     },
	I0421 19:17:24.401465   40508 command_runner.go:130] >     {
	I0421 19:17:24.401470   40508 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0421 19:17:24.401474   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401478   40508 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0421 19:17:24.401481   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401486   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401493   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0421 19:17:24.401499   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0421 19:17:24.401502   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401506   40508 command_runner.go:130] >       "size": "750414",
	I0421 19:17:24.401510   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401514   40508 command_runner.go:130] >         "value": "65535"
	I0421 19:17:24.401518   40508 command_runner.go:130] >       },
	I0421 19:17:24.401522   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401526   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401532   40508 command_runner.go:130] >       "pinned": true
	I0421 19:17:24.401535   40508 command_runner.go:130] >     }
	I0421 19:17:24.401538   40508 command_runner.go:130] >   ]
	I0421 19:17:24.401541   40508 command_runner.go:130] > }
	I0421 19:17:24.401698   40508 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 19:17:24.401708   40508 crio.go:433] Images already preloaded, skipping extraction
	I0421 19:17:24.401750   40508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:17:24.438514   40508 command_runner.go:130] > {
	I0421 19:17:24.438537   40508 command_runner.go:130] >   "images": [
	I0421 19:17:24.438542   40508 command_runner.go:130] >     {
	I0421 19:17:24.438549   40508 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0421 19:17:24.438554   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438560   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0421 19:17:24.438565   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438570   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438580   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0421 19:17:24.438587   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0421 19:17:24.438592   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438597   40508 command_runner.go:130] >       "size": "65291810",
	I0421 19:17:24.438601   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438606   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.438621   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438628   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438632   40508 command_runner.go:130] >     },
	I0421 19:17:24.438635   40508 command_runner.go:130] >     {
	I0421 19:17:24.438641   40508 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0421 19:17:24.438645   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438653   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0421 19:17:24.438657   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438666   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438678   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0421 19:17:24.438693   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0421 19:17:24.438701   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438707   40508 command_runner.go:130] >       "size": "1363676",
	I0421 19:17:24.438715   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438725   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.438734   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438744   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438751   40508 command_runner.go:130] >     },
	I0421 19:17:24.438756   40508 command_runner.go:130] >     {
	I0421 19:17:24.438769   40508 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0421 19:17:24.438778   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438788   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0421 19:17:24.438797   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438803   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438818   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0421 19:17:24.438829   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0421 19:17:24.438835   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438841   40508 command_runner.go:130] >       "size": "31470524",
	I0421 19:17:24.438848   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438852   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.438859   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438862   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438868   40508 command_runner.go:130] >     },
	I0421 19:17:24.438872   40508 command_runner.go:130] >     {
	I0421 19:17:24.438880   40508 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0421 19:17:24.438886   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438891   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0421 19:17:24.438897   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438901   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438910   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0421 19:17:24.438922   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0421 19:17:24.438928   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438932   40508 command_runner.go:130] >       "size": "61245718",
	I0421 19:17:24.438938   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438942   40508 command_runner.go:130] >       "username": "nonroot",
	I0421 19:17:24.438953   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438959   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438962   40508 command_runner.go:130] >     },
	I0421 19:17:24.438966   40508 command_runner.go:130] >     {
	I0421 19:17:24.438975   40508 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0421 19:17:24.438981   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438986   40508 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0421 19:17:24.438992   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438996   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439005   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0421 19:17:24.439014   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0421 19:17:24.439019   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439024   40508 command_runner.go:130] >       "size": "150779692",
	I0421 19:17:24.439029   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439033   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439039   40508 command_runner.go:130] >       },
	I0421 19:17:24.439043   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439050   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439054   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439061   40508 command_runner.go:130] >     },
	I0421 19:17:24.439065   40508 command_runner.go:130] >     {
	I0421 19:17:24.439073   40508 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0421 19:17:24.439080   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439085   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0421 19:17:24.439091   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439095   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439104   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0421 19:17:24.439114   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0421 19:17:24.439119   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439124   40508 command_runner.go:130] >       "size": "117609952",
	I0421 19:17:24.439128   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439135   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439138   40508 command_runner.go:130] >       },
	I0421 19:17:24.439145   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439149   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439155   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439158   40508 command_runner.go:130] >     },
	I0421 19:17:24.439164   40508 command_runner.go:130] >     {
	I0421 19:17:24.439170   40508 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0421 19:17:24.439184   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439192   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0421 19:17:24.439198   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439201   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439211   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0421 19:17:24.439220   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0421 19:17:24.439229   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439234   40508 command_runner.go:130] >       "size": "112170310",
	I0421 19:17:24.439240   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439244   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439250   40508 command_runner.go:130] >       },
	I0421 19:17:24.439254   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439261   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439264   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439272   40508 command_runner.go:130] >     },
	I0421 19:17:24.439275   40508 command_runner.go:130] >     {
	I0421 19:17:24.439282   40508 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0421 19:17:24.439288   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439293   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0421 19:17:24.439298   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439302   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439318   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0421 19:17:24.439327   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0421 19:17:24.439333   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439338   40508 command_runner.go:130] >       "size": "85932953",
	I0421 19:17:24.439344   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.439348   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439354   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439358   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439363   40508 command_runner.go:130] >     },
	I0421 19:17:24.439367   40508 command_runner.go:130] >     {
	I0421 19:17:24.439375   40508 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0421 19:17:24.439379   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439390   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0421 19:17:24.439398   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439405   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439420   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0421 19:17:24.439434   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0421 19:17:24.439443   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439453   40508 command_runner.go:130] >       "size": "63026502",
	I0421 19:17:24.439462   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439468   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439477   40508 command_runner.go:130] >       },
	I0421 19:17:24.439487   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439496   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439505   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439513   40508 command_runner.go:130] >     },
	I0421 19:17:24.439517   40508 command_runner.go:130] >     {
	I0421 19:17:24.439524   40508 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0421 19:17:24.439530   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439534   40508 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0421 19:17:24.439540   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439546   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439555   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0421 19:17:24.439567   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0421 19:17:24.439574   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439578   40508 command_runner.go:130] >       "size": "750414",
	I0421 19:17:24.439584   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439589   40508 command_runner.go:130] >         "value": "65535"
	I0421 19:17:24.439594   40508 command_runner.go:130] >       },
	I0421 19:17:24.439598   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439605   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439609   40508 command_runner.go:130] >       "pinned": true
	I0421 19:17:24.439615   40508 command_runner.go:130] >     }
	I0421 19:17:24.439618   40508 command_runner.go:130] >   ]
	I0421 19:17:24.439624   40508 command_runner.go:130] > }
	I0421 19:17:24.439739   40508 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 19:17:24.439750   40508 cache_images.go:84] Images are preloaded, skipping loading
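crio.go:514 and cache_images.go:84 conclude from the JSON above that the preloaded image store already contains everything needed for v1.30.0, so no extraction or pull is required. A sketch of how the same check can be repeated by hand against that output:

	    $ sudo crictl images --output json | grep -o '"registry.k8s.io/kube-[a-z-]*:v1.30.0"'
	    "registry.k8s.io/kube-apiserver:v1.30.0"
	    "registry.k8s.io/kube-controller-manager:v1.30.0"
	    "registry.k8s.io/kube-proxy:v1.30.0"
	    "registry.k8s.io/kube-scheduler:v1.30.0"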
	I0421 19:17:24.439757   40508 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.0 crio true true} ...
	I0421 19:17:24.439853   40508 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-860427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
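The [Unit]/[Service]/[Install] fragment logged by kubeadm.go:940 is a systemd drop-in for kubelet.service: the empty ExecStart= clears whatever command line the base unit defines, and the second ExecStart= replaces it with the minikube-provisioned v1.30.0 kubelet and this node's flags. A sketch of how to inspect the merged unit on the node:

	    # inside the multinode-860427 VM; prints the base kubelet.service plus the
	    # drop-in carrying the ExecStart override above (the drop-in's on-disk path
	    # varies, so its output is not reproduced here)
	    $ systemctl cat kubelet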
	I0421 19:17:24.439912   40508 ssh_runner.go:195] Run: crio config
	I0421 19:17:24.487864   40508 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0421 19:17:24.487889   40508 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0421 19:17:24.487896   40508 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0421 19:17:24.487900   40508 command_runner.go:130] > #
	I0421 19:17:24.487907   40508 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0421 19:17:24.487914   40508 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0421 19:17:24.487920   40508 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0421 19:17:24.487929   40508 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0421 19:17:24.487933   40508 command_runner.go:130] > # reload'.
	I0421 19:17:24.487939   40508 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0421 19:17:24.487948   40508 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0421 19:17:24.487955   40508 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0421 19:17:24.487963   40508 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0421 19:17:24.487972   40508 command_runner.go:130] > [crio]
	I0421 19:17:24.487982   40508 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0421 19:17:24.487993   40508 command_runner.go:130] > # containers images, in this directory.
	I0421 19:17:24.488028   40508 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0421 19:17:24.488065   40508 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0421 19:17:24.488212   40508 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0421 19:17:24.488230   40508 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0421 19:17:24.488639   40508 command_runner.go:130] > # imagestore = ""
	I0421 19:17:24.488653   40508 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0421 19:17:24.488660   40508 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0421 19:17:24.488810   40508 command_runner.go:130] > storage_driver = "overlay"
	I0421 19:17:24.488825   40508 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0421 19:17:24.488831   40508 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0421 19:17:24.488836   40508 command_runner.go:130] > storage_option = [
	I0421 19:17:24.489032   40508 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0421 19:17:24.489081   40508 command_runner.go:130] > ]
	I0421 19:17:24.489098   40508 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0421 19:17:24.489109   40508 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0421 19:17:24.489241   40508 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0421 19:17:24.489256   40508 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0421 19:17:24.489265   40508 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0421 19:17:24.489273   40508 command_runner.go:130] > # always happen on a node reboot
	I0421 19:17:24.489749   40508 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0421 19:17:24.489770   40508 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0421 19:17:24.489780   40508 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0421 19:17:24.489791   40508 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0421 19:17:24.489921   40508 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0421 19:17:24.489940   40508 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0421 19:17:24.489958   40508 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0421 19:17:24.490332   40508 command_runner.go:130] > # internal_wipe = true
	I0421 19:17:24.490351   40508 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0421 19:17:24.490360   40508 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0421 19:17:24.490780   40508 command_runner.go:130] > # internal_repair = false
	I0421 19:17:24.490800   40508 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0421 19:17:24.490811   40508 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0421 19:17:24.490824   40508 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0421 19:17:24.491152   40508 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0421 19:17:24.491170   40508 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0421 19:17:24.491178   40508 command_runner.go:130] > [crio.api]
	I0421 19:17:24.491190   40508 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0421 19:17:24.491640   40508 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0421 19:17:24.491663   40508 command_runner.go:130] > # IP address on which the stream server will listen.
	I0421 19:17:24.491931   40508 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0421 19:17:24.491950   40508 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0421 19:17:24.491959   40508 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0421 19:17:24.492339   40508 command_runner.go:130] > # stream_port = "0"
	I0421 19:17:24.492358   40508 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0421 19:17:24.492746   40508 command_runner.go:130] > # stream_enable_tls = false
	I0421 19:17:24.492763   40508 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0421 19:17:24.492954   40508 command_runner.go:130] > # stream_idle_timeout = ""
	I0421 19:17:24.492972   40508 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0421 19:17:24.492983   40508 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0421 19:17:24.492990   40508 command_runner.go:130] > # minutes.
	I0421 19:17:24.493284   40508 command_runner.go:130] > # stream_tls_cert = ""
	I0421 19:17:24.493305   40508 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0421 19:17:24.493314   40508 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0421 19:17:24.493712   40508 command_runner.go:130] > # stream_tls_key = ""
	I0421 19:17:24.493726   40508 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0421 19:17:24.493737   40508 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0421 19:17:24.493753   40508 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0421 19:17:24.493765   40508 command_runner.go:130] > # stream_tls_ca = ""
	I0421 19:17:24.493791   40508 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0421 19:17:24.493803   40508 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0421 19:17:24.493815   40508 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0421 19:17:24.493827   40508 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0421 19:17:24.493839   40508 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0421 19:17:24.493852   40508 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0421 19:17:24.493861   40508 command_runner.go:130] > [crio.runtime]
	I0421 19:17:24.493872   40508 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0421 19:17:24.493884   40508 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0421 19:17:24.493906   40508 command_runner.go:130] > # "nofile=1024:2048"
	I0421 19:17:24.493919   40508 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0421 19:17:24.493926   40508 command_runner.go:130] > # default_ulimits = [
	I0421 19:17:24.493933   40508 command_runner.go:130] > # ]
	I0421 19:17:24.493953   40508 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0421 19:17:24.493963   40508 command_runner.go:130] > # no_pivot = false
	I0421 19:17:24.493977   40508 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0421 19:17:24.493991   40508 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0421 19:17:24.494004   40508 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0421 19:17:24.494018   40508 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0421 19:17:24.494033   40508 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0421 19:17:24.494049   40508 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0421 19:17:24.494069   40508 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0421 19:17:24.494078   40508 command_runner.go:130] > # Cgroup setting for conmon
	I0421 19:17:24.494093   40508 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0421 19:17:24.494104   40508 command_runner.go:130] > conmon_cgroup = "pod"
	I0421 19:17:24.494118   40508 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0421 19:17:24.494130   40508 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0421 19:17:24.494143   40508 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0421 19:17:24.494152   40508 command_runner.go:130] > conmon_env = [
	I0421 19:17:24.494165   40508 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0421 19:17:24.494173   40508 command_runner.go:130] > ]
	I0421 19:17:24.494183   40508 command_runner.go:130] > # Additional environment variables to set for all the
	I0421 19:17:24.494192   40508 command_runner.go:130] > # containers. These are overridden if set in the
	I0421 19:17:24.494205   40508 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0421 19:17:24.494215   40508 command_runner.go:130] > # default_env = [
	I0421 19:17:24.494222   40508 command_runner.go:130] > # ]
	I0421 19:17:24.494237   40508 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0421 19:17:24.494253   40508 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0421 19:17:24.494262   40508 command_runner.go:130] > # selinux = false
	I0421 19:17:24.494276   40508 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0421 19:17:24.494289   40508 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0421 19:17:24.494301   40508 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0421 19:17:24.494309   40508 command_runner.go:130] > # seccomp_profile = ""
	I0421 19:17:24.494324   40508 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0421 19:17:24.494336   40508 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0421 19:17:24.494350   40508 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0421 19:17:24.494361   40508 command_runner.go:130] > # which might increase security.
	I0421 19:17:24.494373   40508 command_runner.go:130] > # This option is currently deprecated,
	I0421 19:17:24.494386   40508 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0421 19:17:24.494398   40508 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0421 19:17:24.494412   40508 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0421 19:17:24.494426   40508 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0421 19:17:24.494440   40508 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0421 19:17:24.494453   40508 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0421 19:17:24.494465   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.494484   40508 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0421 19:17:24.494498   40508 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0421 19:17:24.494505   40508 command_runner.go:130] > # the cgroup blockio controller.
	I0421 19:17:24.494517   40508 command_runner.go:130] > # blockio_config_file = ""
	I0421 19:17:24.494531   40508 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0421 19:17:24.494541   40508 command_runner.go:130] > # blockio parameters.
	I0421 19:17:24.494549   40508 command_runner.go:130] > # blockio_reload = false
	I0421 19:17:24.494563   40508 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0421 19:17:24.494573   40508 command_runner.go:130] > # irqbalance daemon.
	I0421 19:17:24.494585   40508 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0421 19:17:24.494596   40508 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0421 19:17:24.494611   40508 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0421 19:17:24.494625   40508 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0421 19:17:24.494639   40508 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0421 19:17:24.494652   40508 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0421 19:17:24.494665   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.494674   40508 command_runner.go:130] > # rdt_config_file = ""
	I0421 19:17:24.494691   40508 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0421 19:17:24.494702   40508 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0421 19:17:24.494730   40508 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0421 19:17:24.494749   40508 command_runner.go:130] > # separate_pull_cgroup = ""
	I0421 19:17:24.494760   40508 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0421 19:17:24.494775   40508 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0421 19:17:24.494784   40508 command_runner.go:130] > # will be added.
	I0421 19:17:24.494792   40508 command_runner.go:130] > # default_capabilities = [
	I0421 19:17:24.494801   40508 command_runner.go:130] > # 	"CHOWN",
	I0421 19:17:24.494810   40508 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0421 19:17:24.494819   40508 command_runner.go:130] > # 	"FSETID",
	I0421 19:17:24.494826   40508 command_runner.go:130] > # 	"FOWNER",
	I0421 19:17:24.494840   40508 command_runner.go:130] > # 	"SETGID",
	I0421 19:17:24.494853   40508 command_runner.go:130] > # 	"SETUID",
	I0421 19:17:24.494863   40508 command_runner.go:130] > # 	"SETPCAP",
	I0421 19:17:24.494872   40508 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0421 19:17:24.494881   40508 command_runner.go:130] > # 	"KILL",
	I0421 19:17:24.494888   40508 command_runner.go:130] > # ]
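A hedged example of tightening the default capability set (names are taken from the commented defaults above; the reduced selection is an assumption, not this cluster's setting):
	default_capabilities = [
		"CHOWN",
		"SETUID",
		"SETGID",
		"NET_BIND_SERVICE",   # keeps low-port binding; drops KILL, FOWNER, etc.
	]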
	I0421 19:17:24.494903   40508 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0421 19:17:24.494917   40508 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0421 19:17:24.494928   40508 command_runner.go:130] > # add_inheritable_capabilities = false
	I0421 19:17:24.494939   40508 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0421 19:17:24.494951   40508 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0421 19:17:24.494959   40508 command_runner.go:130] > default_sysctls = [
	I0421 19:17:24.494976   40508 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0421 19:17:24.494985   40508 command_runner.go:130] > ]
	I0421 19:17:24.494994   40508 command_runner.go:130] > # List of devices on the host that a
	I0421 19:17:24.495007   40508 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0421 19:17:24.495017   40508 command_runner.go:130] > # allowed_devices = [
	I0421 19:17:24.495026   40508 command_runner.go:130] > # 	"/dev/fuse",
	I0421 19:17:24.495033   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495043   40508 command_runner.go:130] > # List of additional devices, specified as
	I0421 19:17:24.495056   40508 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0421 19:17:24.495068   40508 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0421 19:17:24.495081   40508 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0421 19:17:24.495092   40508 command_runner.go:130] > # additional_devices = [
	I0421 19:17:24.495100   40508 command_runner.go:130] > # ]
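For illustration, an additional_devices entry using the "<device-on-host>:<device-on-container>:<permissions>" format from the comment above (the device path is hypothetical):
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",   # read, write, mknod permissions inside the container
	]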
	I0421 19:17:24.495110   40508 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0421 19:17:24.495120   40508 command_runner.go:130] > # cdi_spec_dirs = [
	I0421 19:17:24.495131   40508 command_runner.go:130] > # 	"/etc/cdi",
	I0421 19:17:24.495139   40508 command_runner.go:130] > # 	"/var/run/cdi",
	I0421 19:17:24.495148   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495158   40508 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0421 19:17:24.495172   40508 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0421 19:17:24.495182   40508 command_runner.go:130] > # Defaults to false.
	I0421 19:17:24.495191   40508 command_runner.go:130] > # device_ownership_from_security_context = false
	I0421 19:17:24.495205   40508 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0421 19:17:24.495218   40508 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0421 19:17:24.495227   40508 command_runner.go:130] > # hooks_dir = [
	I0421 19:17:24.495235   40508 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0421 19:17:24.495243   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495254   40508 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0421 19:17:24.495268   40508 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0421 19:17:24.495279   40508 command_runner.go:130] > # its default mounts from the following two files:
	I0421 19:17:24.495287   40508 command_runner.go:130] > #
	I0421 19:17:24.495298   40508 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0421 19:17:24.495311   40508 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0421 19:17:24.495323   40508 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0421 19:17:24.495331   40508 command_runner.go:130] > #
	I0421 19:17:24.495342   40508 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0421 19:17:24.495355   40508 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0421 19:17:24.495369   40508 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0421 19:17:24.495381   40508 command_runner.go:130] > #      only add mounts it finds in this file.
	I0421 19:17:24.495390   40508 command_runner.go:130] > #
	I0421 19:17:24.495397   40508 command_runner.go:130] > # default_mounts_file = ""
	I0421 19:17:24.495408   40508 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0421 19:17:24.495423   40508 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0421 19:17:24.495433   40508 command_runner.go:130] > pids_limit = 1024
	I0421 19:17:24.495444   40508 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0421 19:17:24.495458   40508 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0421 19:17:24.495472   40508 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0421 19:17:24.495489   40508 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0421 19:17:24.495498   40508 command_runner.go:130] > # log_size_max = -1
	I0421 19:17:24.495510   40508 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0421 19:17:24.495519   40508 command_runner.go:130] > # log_to_journald = false
	I0421 19:17:24.495531   40508 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0421 19:17:24.495543   40508 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0421 19:17:24.495555   40508 command_runner.go:130] > # Path to directory for container attach sockets.
	I0421 19:17:24.495567   40508 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0421 19:17:24.495578   40508 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0421 19:17:24.495585   40508 command_runner.go:130] > # bind_mount_prefix = ""
	I0421 19:17:24.495598   40508 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0421 19:17:24.495605   40508 command_runner.go:130] > # read_only = false
	I0421 19:17:24.495618   40508 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0421 19:17:24.495632   40508 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0421 19:17:24.495642   40508 command_runner.go:130] > # live configuration reload.
	I0421 19:17:24.495649   40508 command_runner.go:130] > # log_level = "info"
	I0421 19:17:24.495662   40508 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0421 19:17:24.495674   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.495686   40508 command_runner.go:130] > # log_filter = ""
	I0421 19:17:24.495697   40508 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0421 19:17:24.495710   40508 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0421 19:17:24.495721   40508 command_runner.go:130] > # separated by comma.
	I0421 19:17:24.495738   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495748   40508 command_runner.go:130] > # uid_mappings = ""
	I0421 19:17:24.495759   40508 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0421 19:17:24.495771   40508 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0421 19:17:24.495781   40508 command_runner.go:130] > # separated by comma.
	I0421 19:17:24.495795   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495804   40508 command_runner.go:130] > # gid_mappings = ""
	I0421 19:17:24.495815   40508 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0421 19:17:24.495829   40508 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0421 19:17:24.495846   40508 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0421 19:17:24.495862   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495872   40508 command_runner.go:130] > # minimum_mappable_uid = -1
	I0421 19:17:24.495882   40508 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0421 19:17:24.495896   40508 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0421 19:17:24.495910   40508 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0421 19:17:24.495926   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495936   40508 command_runner.go:130] > # minimum_mappable_gid = -1
	I0421 19:17:24.495950   40508 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0421 19:17:24.495964   40508 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0421 19:17:24.495977   40508 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0421 19:17:24.495987   40508 command_runner.go:130] > # ctr_stop_timeout = 30
	I0421 19:17:24.495998   40508 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0421 19:17:24.496013   40508 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0421 19:17:24.496025   40508 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0421 19:17:24.496034   40508 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0421 19:17:24.496043   40508 command_runner.go:130] > drop_infra_ctr = false
	I0421 19:17:24.496053   40508 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0421 19:17:24.496065   40508 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0421 19:17:24.496080   40508 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0421 19:17:24.496090   40508 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0421 19:17:24.496104   40508 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0421 19:17:24.496117   40508 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0421 19:17:24.496129   40508 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0421 19:17:24.496138   40508 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0421 19:17:24.496148   40508 command_runner.go:130] > # shared_cpuset = ""
	I0421 19:17:24.496159   40508 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0421 19:17:24.496170   40508 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0421 19:17:24.496181   40508 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0421 19:17:24.496194   40508 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0421 19:17:24.496204   40508 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0421 19:17:24.496217   40508 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0421 19:17:24.496227   40508 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0421 19:17:24.496238   40508 command_runner.go:130] > # enable_criu_support = false
	I0421 19:17:24.496250   40508 command_runner.go:130] > # Enable/disable the generation of the container,
	I0421 19:17:24.496261   40508 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0421 19:17:24.496275   40508 command_runner.go:130] > # enable_pod_events = false
	I0421 19:17:24.496288   40508 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0421 19:17:24.496310   40508 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0421 19:17:24.496320   40508 command_runner.go:130] > # default_runtime = "runc"
	I0421 19:17:24.496330   40508 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0421 19:17:24.496346   40508 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the missing path as a directory).
	I0421 19:17:24.496365   40508 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0421 19:17:24.496376   40508 command_runner.go:130] > # creation as a file is not desired either.
	I0421 19:17:24.496395   40508 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0421 19:17:24.496406   40508 command_runner.go:130] > # the hostname is being managed dynamically.
	I0421 19:17:24.496415   40508 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0421 19:17:24.496423   40508 command_runner.go:130] > # ]
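A minimal sketch of rejecting absent mount sources, using the /etc/hostname case mentioned above:
	absent_mount_sources_to_reject = [
		"/etc/hostname",   # fail container creation rather than create this path as a directory
	]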
	I0421 19:17:24.496434   40508 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0421 19:17:24.496448   40508 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0421 19:17:24.496462   40508 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0421 19:17:24.496473   40508 command_runner.go:130] > # Each entry in the table should follow the format:
	I0421 19:17:24.496478   40508 command_runner.go:130] > #
	I0421 19:17:24.496489   40508 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0421 19:17:24.496501   40508 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0421 19:17:24.496527   40508 command_runner.go:130] > # runtime_type = "oci"
	I0421 19:17:24.496537   40508 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0421 19:17:24.496549   40508 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0421 19:17:24.496561   40508 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0421 19:17:24.496570   40508 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0421 19:17:24.496579   40508 command_runner.go:130] > # monitor_env = []
	I0421 19:17:24.496588   40508 command_runner.go:130] > # privileged_without_host_devices = false
	I0421 19:17:24.496598   40508 command_runner.go:130] > # allowed_annotations = []
	I0421 19:17:24.496611   40508 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0421 19:17:24.496620   40508 command_runner.go:130] > # Where:
	I0421 19:17:24.496629   40508 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0421 19:17:24.496643   40508 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0421 19:17:24.496657   40508 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0421 19:17:24.496668   40508 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0421 19:17:24.496682   40508 command_runner.go:130] > #   in $PATH.
	I0421 19:17:24.496695   40508 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0421 19:17:24.496706   40508 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0421 19:17:24.496721   40508 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0421 19:17:24.496730   40508 command_runner.go:130] > #   state.
	I0421 19:17:24.496741   40508 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0421 19:17:24.496754   40508 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0421 19:17:24.496767   40508 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0421 19:17:24.496779   40508 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0421 19:17:24.496793   40508 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0421 19:17:24.496807   40508 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0421 19:17:24.496819   40508 command_runner.go:130] > #   The currently recognized values are:
	I0421 19:17:24.496831   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0421 19:17:24.496846   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0421 19:17:24.496860   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0421 19:17:24.496873   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0421 19:17:24.496889   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0421 19:17:24.496903   40508 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0421 19:17:24.496918   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0421 19:17:24.496932   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0421 19:17:24.496946   40508 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0421 19:17:24.496960   40508 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0421 19:17:24.496970   40508 command_runner.go:130] > #   deprecated option "conmon".
	I0421 19:17:24.496984   40508 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0421 19:17:24.496993   40508 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0421 19:17:24.497007   40508 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0421 19:17:24.497018   40508 command_runner.go:130] > #   should be moved to the container's cgroup
	I0421 19:17:24.497033   40508 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0421 19:17:24.497045   40508 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0421 19:17:24.497059   40508 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0421 19:17:24.497071   40508 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0421 19:17:24.497078   40508 command_runner.go:130] > #
	I0421 19:17:24.497087   40508 command_runner.go:130] > # Using the seccomp notifier feature:
	I0421 19:17:24.497095   40508 command_runner.go:130] > #
	I0421 19:17:24.497106   40508 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0421 19:17:24.497121   40508 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0421 19:17:24.497128   40508 command_runner.go:130] > #
	I0421 19:17:24.497141   40508 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0421 19:17:24.497155   40508 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0421 19:17:24.497163   40508 command_runner.go:130] > #
	I0421 19:17:24.497174   40508 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0421 19:17:24.497183   40508 command_runner.go:130] > # feature.
	I0421 19:17:24.497189   40508 command_runner.go:130] > #
	I0421 19:17:24.497202   40508 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0421 19:17:24.497215   40508 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0421 19:17:24.497229   40508 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0421 19:17:24.497242   40508 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0421 19:17:24.497257   40508 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0421 19:17:24.497264   40508 command_runner.go:130] > #
	I0421 19:17:24.497274   40508 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0421 19:17:24.497288   40508 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0421 19:17:24.497298   40508 command_runner.go:130] > #
	I0421 19:17:24.497309   40508 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0421 19:17:24.497322   40508 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0421 19:17:24.497330   40508 command_runner.go:130] > #
	I0421 19:17:24.497340   40508 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0421 19:17:24.497353   40508 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0421 19:17:24.497363   40508 command_runner.go:130] > # limitation.
	I0421 19:17:24.497371   40508 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0421 19:17:24.497381   40508 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0421 19:17:24.497390   40508 command_runner.go:130] > runtime_type = "oci"
	I0421 19:17:24.497400   40508 command_runner.go:130] > runtime_root = "/run/runc"
	I0421 19:17:24.497410   40508 command_runner.go:130] > runtime_config_path = ""
	I0421 19:17:24.497419   40508 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0421 19:17:24.497427   40508 command_runner.go:130] > monitor_cgroup = "pod"
	I0421 19:17:24.497437   40508 command_runner.go:130] > monitor_exec_cgroup = ""
	I0421 19:17:24.497447   40508 command_runner.go:130] > monitor_env = [
	I0421 19:17:24.497458   40508 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0421 19:17:24.497466   40508 command_runner.go:130] > ]
	I0421 19:17:24.497475   40508 command_runner.go:130] > privileged_without_host_devices = false
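Following the [crio.runtime.runtimes.runtime-handler] format documented above, an additional handler could be declared as in this sketch (the crun paths and the allowed annotation are assumptions, not part of this cluster's configuration):
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"            # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",        # lets pods request extra devices via annotation
	]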
	I0421 19:17:24.497488   40508 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0421 19:17:24.497500   40508 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0421 19:17:24.497513   40508 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0421 19:17:24.497527   40508 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0421 19:17:24.497543   40508 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0421 19:17:24.497556   40508 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0421 19:17:24.497578   40508 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0421 19:17:24.497594   40508 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0421 19:17:24.497606   40508 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0421 19:17:24.497619   40508 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0421 19:17:24.497627   40508 command_runner.go:130] > # Example:
	I0421 19:17:24.497636   40508 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0421 19:17:24.497648   40508 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0421 19:17:24.497661   40508 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0421 19:17:24.497673   40508 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0421 19:17:24.497686   40508 command_runner.go:130] > # cpuset = 0
	I0421 19:17:24.497695   40508 command_runner.go:130] > # cpushares = "0-1"
	I0421 19:17:24.497702   40508 command_runner.go:130] > # Where:
	I0421 19:17:24.497710   40508 command_runner.go:130] > # The workload name is workload-type.
	I0421 19:17:24.497725   40508 command_runner.go:130] > # To opt in, the pod must carry the "io.crio.workload" annotation (this is a precise string match).
	I0421 19:17:24.497738   40508 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0421 19:17:24.497750   40508 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0421 19:17:24.497767   40508 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0421 19:17:24.497780   40508 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
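Putting the workload example above together as one block (note that in the commented sample the cpuset and cpushares values appear swapped: cpuset takes a CPU list such as "0-1", cpushares a share count), a sketch with illustrative values would be:
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"      # default CPU set for opted-in containers
	cpushares = 512     # default CPU shares; both values here are illustrative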
	I0421 19:17:24.497793   40508 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0421 19:17:24.497808   40508 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0421 19:17:24.497819   40508 command_runner.go:130] > # Default value is set to true
	I0421 19:17:24.497827   40508 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0421 19:17:24.497840   40508 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0421 19:17:24.497851   40508 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0421 19:17:24.497861   40508 command_runner.go:130] > # Default value is set to 'false'
	I0421 19:17:24.497869   40508 command_runner.go:130] > # disable_hostport_mapping = false
	I0421 19:17:24.497883   40508 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0421 19:17:24.497892   40508 command_runner.go:130] > #
	I0421 19:17:24.497903   40508 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0421 19:17:24.497917   40508 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0421 19:17:24.497931   40508 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0421 19:17:24.497942   40508 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0421 19:17:24.497950   40508 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0421 19:17:24.497954   40508 command_runner.go:130] > [crio.image]
	I0421 19:17:24.497962   40508 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0421 19:17:24.497968   40508 command_runner.go:130] > # default_transport = "docker://"
	I0421 19:17:24.497980   40508 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0421 19:17:24.497990   40508 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0421 19:17:24.497996   40508 command_runner.go:130] > # global_auth_file = ""
	I0421 19:17:24.498003   40508 command_runner.go:130] > # The image used to instantiate infra containers.
	I0421 19:17:24.498011   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.498019   40508 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0421 19:17:24.498029   40508 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0421 19:17:24.498040   40508 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0421 19:17:24.498048   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.498068   40508 command_runner.go:130] > # pause_image_auth_file = ""
	I0421 19:17:24.498078   40508 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0421 19:17:24.498088   40508 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0421 19:17:24.498099   40508 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0421 19:17:24.498109   40508 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0421 19:17:24.498116   40508 command_runner.go:130] > # pause_command = "/pause"
	I0421 19:17:24.498126   40508 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0421 19:17:24.498136   40508 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0421 19:17:24.498146   40508 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0421 19:17:24.498155   40508 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0421 19:17:24.498165   40508 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0421 19:17:24.498178   40508 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0421 19:17:24.498184   40508 command_runner.go:130] > # pinned_images = [
	I0421 19:17:24.498190   40508 command_runner.go:130] > # ]
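As an illustrative pinned_images list using the exact and glob patterns described above (the second entry is a hypothetical image name):
	pinned_images = [
		"registry.k8s.io/pause:3.9",          # exact match, as recommended for the pause image
		"quay.io/example/critical-agent*",    # glob match: hypothetical image kept out of kubelet GC
	]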
	I0421 19:17:24.498202   40508 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0421 19:17:24.498216   40508 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0421 19:17:24.498230   40508 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0421 19:17:24.498244   40508 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0421 19:17:24.498258   40508 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0421 19:17:24.498267   40508 command_runner.go:130] > # signature_policy = ""
	I0421 19:17:24.498279   40508 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0421 19:17:24.498293   40508 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0421 19:17:24.498307   40508 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0421 19:17:24.498321   40508 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0421 19:17:24.498333   40508 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0421 19:17:24.498344   40508 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0421 19:17:24.498358   40508 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0421 19:17:24.498375   40508 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0421 19:17:24.498385   40508 command_runner.go:130] > # changing them here.
	I0421 19:17:24.498394   40508 command_runner.go:130] > # insecure_registries = [
	I0421 19:17:24.498402   40508 command_runner.go:130] > # ]
	I0421 19:17:24.498413   40508 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0421 19:17:24.498425   40508 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0421 19:17:24.498435   40508 command_runner.go:130] > # image_volumes = "mkdir"
	I0421 19:17:24.498447   40508 command_runner.go:130] > # Temporary directory to use for storing big files
	I0421 19:17:24.498458   40508 command_runner.go:130] > # big_files_temporary_dir = ""
	I0421 19:17:24.498472   40508 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0421 19:17:24.498481   40508 command_runner.go:130] > # CNI plugins.
	I0421 19:17:24.498490   40508 command_runner.go:130] > [crio.network]
	I0421 19:17:24.498500   40508 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0421 19:17:24.498513   40508 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0421 19:17:24.498522   40508 command_runner.go:130] > # cni_default_network = ""
	I0421 19:17:24.498532   40508 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0421 19:17:24.498542   40508 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0421 19:17:24.498553   40508 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0421 19:17:24.498563   40508 command_runner.go:130] > # plugin_dirs = [
	I0421 19:17:24.498572   40508 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0421 19:17:24.498581   40508 command_runner.go:130] > # ]
	I0421 19:17:24.498591   40508 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0421 19:17:24.498601   40508 command_runner.go:130] > [crio.metrics]
	I0421 19:17:24.498611   40508 command_runner.go:130] > # Globally enable or disable metrics support.
	I0421 19:17:24.498621   40508 command_runner.go:130] > enable_metrics = true
	I0421 19:17:24.498631   40508 command_runner.go:130] > # Specify enabled metrics collectors.
	I0421 19:17:24.498640   40508 command_runner.go:130] > # Per default all metrics are enabled.
	I0421 19:17:24.498654   40508 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0421 19:17:24.498667   40508 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0421 19:17:24.498685   40508 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0421 19:17:24.498695   40508 command_runner.go:130] > # metrics_collectors = [
	I0421 19:17:24.498704   40508 command_runner.go:130] > # 	"operations",
	I0421 19:17:24.498715   40508 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0421 19:17:24.498727   40508 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0421 19:17:24.498738   40508 command_runner.go:130] > # 	"operations_errors",
	I0421 19:17:24.498746   40508 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0421 19:17:24.498757   40508 command_runner.go:130] > # 	"image_pulls_by_name",
	I0421 19:17:24.498768   40508 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0421 19:17:24.498778   40508 command_runner.go:130] > # 	"image_pulls_failures",
	I0421 19:17:24.498785   40508 command_runner.go:130] > # 	"image_pulls_successes",
	I0421 19:17:24.498792   40508 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0421 19:17:24.498800   40508 command_runner.go:130] > # 	"image_layer_reuse",
	I0421 19:17:24.498811   40508 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0421 19:17:24.498827   40508 command_runner.go:130] > # 	"containers_oom_total",
	I0421 19:17:24.498837   40508 command_runner.go:130] > # 	"containers_oom",
	I0421 19:17:24.498845   40508 command_runner.go:130] > # 	"processes_defunct",
	I0421 19:17:24.498854   40508 command_runner.go:130] > # 	"operations_total",
	I0421 19:17:24.498865   40508 command_runner.go:130] > # 	"operations_latency_seconds",
	I0421 19:17:24.498877   40508 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0421 19:17:24.498888   40508 command_runner.go:130] > # 	"operations_errors_total",
	I0421 19:17:24.498898   40508 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0421 19:17:24.498907   40508 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0421 19:17:24.498917   40508 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0421 19:17:24.498927   40508 command_runner.go:130] > # 	"image_pulls_success_total",
	I0421 19:17:24.498935   40508 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0421 19:17:24.498945   40508 command_runner.go:130] > # 	"containers_oom_count_total",
	I0421 19:17:24.498953   40508 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0421 19:17:24.498964   40508 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0421 19:17:24.498972   40508 command_runner.go:130] > # ]
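A hedged sketch of narrowing the collector set (any of the names listed above can be used, with or without the "crio_"/"container_runtime_" prefixes; this selection is illustrative):
	metrics_collectors = [
		"operations",                    # same series as "crio_operations"
		"image_pulls_failures",
		"containers_oom_count_total",
	]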
	I0421 19:17:24.498982   40508 command_runner.go:130] > # The port on which the metrics server will listen.
	I0421 19:17:24.498992   40508 command_runner.go:130] > # metrics_port = 9090
	I0421 19:17:24.499009   40508 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0421 19:17:24.499018   40508 command_runner.go:130] > # metrics_socket = ""
	I0421 19:17:24.499027   40508 command_runner.go:130] > # The certificate for the secure metrics server.
	I0421 19:17:24.499039   40508 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0421 19:17:24.499051   40508 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0421 19:17:24.499062   40508 command_runner.go:130] > # certificate on any modification event.
	I0421 19:17:24.499070   40508 command_runner.go:130] > # metrics_cert = ""
	I0421 19:17:24.499082   40508 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0421 19:17:24.499094   40508 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0421 19:17:24.499104   40508 command_runner.go:130] > # metrics_key = ""
	I0421 19:17:24.499117   40508 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0421 19:17:24.499126   40508 command_runner.go:130] > [crio.tracing]
	I0421 19:17:24.499135   40508 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0421 19:17:24.499146   40508 command_runner.go:130] > # enable_tracing = false
	I0421 19:17:24.499158   40508 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0421 19:17:24.499169   40508 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0421 19:17:24.499183   40508 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0421 19:17:24.499195   40508 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0421 19:17:24.499205   40508 command_runner.go:130] > # CRI-O NRI configuration.
	I0421 19:17:24.499214   40508 command_runner.go:130] > [crio.nri]
	I0421 19:17:24.499222   40508 command_runner.go:130] > # Globally enable or disable NRI.
	I0421 19:17:24.499232   40508 command_runner.go:130] > # enable_nri = false
	I0421 19:17:24.499240   40508 command_runner.go:130] > # NRI socket to listen on.
	I0421 19:17:24.499251   40508 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0421 19:17:24.499260   40508 command_runner.go:130] > # NRI plugin directory to use.
	I0421 19:17:24.499271   40508 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0421 19:17:24.499280   40508 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0421 19:17:24.499295   40508 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0421 19:17:24.499306   40508 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0421 19:17:24.499314   40508 command_runner.go:130] > # nri_disable_connections = false
	I0421 19:17:24.499326   40508 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0421 19:17:24.499337   40508 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0421 19:17:24.499347   40508 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0421 19:17:24.499357   40508 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0421 19:17:24.499373   40508 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0421 19:17:24.499383   40508 command_runner.go:130] > [crio.stats]
	I0421 19:17:24.499394   40508 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0421 19:17:24.499407   40508 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0421 19:17:24.499418   40508 command_runner.go:130] > # stats_collection_period = 0
	I0421 19:17:24.499450   40508 command_runner.go:130] ! time="2024-04-21 19:17:24.460431256Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0421 19:17:24.499472   40508 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0421 19:17:24.499598   40508 cni.go:84] Creating CNI manager for ""
	I0421 19:17:24.499611   40508 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 19:17:24.499622   40508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:17:24.499650   40508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-860427 NodeName:multinode-860427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 19:17:24.499823   40508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-860427"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 19:17:24.499895   40508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:17:24.511498   40508 command_runner.go:130] > kubeadm
	I0421 19:17:24.511522   40508 command_runner.go:130] > kubectl
	I0421 19:17:24.511529   40508 command_runner.go:130] > kubelet
	I0421 19:17:24.511547   40508 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:17:24.511589   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 19:17:24.523285   40508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0421 19:17:24.544057   40508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:17:24.564929   40508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0421 19:17:24.584813   40508 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0421 19:17:24.589598   40508 command_runner.go:130] > 192.168.39.100	control-plane.minikube.internal
	I0421 19:17:24.589676   40508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:17:24.737602   40508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:17:24.753202   40508 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427 for IP: 192.168.39.100
	I0421 19:17:24.753221   40508 certs.go:194] generating shared ca certs ...
	I0421 19:17:24.753240   40508 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:17:24.753508   40508 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 19:17:24.753582   40508 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 19:17:24.753599   40508 certs.go:256] generating profile certs ...
	I0421 19:17:24.753702   40508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/client.key
	I0421 19:17:24.753806   40508 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.key.9236eb8a
	I0421 19:17:24.753864   40508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.key
	I0421 19:17:24.753881   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:17:24.753908   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:17:24.753930   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:17:24.753949   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:17:24.753967   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:17:24.753989   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:17:24.754010   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:17:24.754028   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:17:24.754119   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 19:17:24.754170   40508 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 19:17:24.754186   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 19:17:24.754224   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 19:17:24.754259   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 19:17:24.754295   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 19:17:24.754364   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:17:24.754408   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 19:17:24.754435   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 19:17:24.754457   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:24.755029   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:17:24.782879   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:17:24.810882   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:17:24.837927   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:17:24.864205   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 19:17:24.890126   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 19:17:24.915900   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:17:24.942499   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 19:17:24.970418   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 19:17:24.996775   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 19:17:25.022862   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:17:25.050020   40508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:17:25.067996   40508 ssh_runner.go:195] Run: openssl version
	I0421 19:17:25.074240   40508 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 19:17:25.074367   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 19:17:25.085746   40508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.090720   40508 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.090820   40508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.090858   40508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.097185   40508 command_runner.go:130] > 51391683
	I0421 19:17:25.097333   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 19:17:25.113178   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 19:17:25.141549   40508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.146528   40508 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.146857   40508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.146919   40508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.152858   40508 command_runner.go:130] > 3ec20f2e
	I0421 19:17:25.153089   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:17:25.164160   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:17:25.176359   40508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.181163   40508 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.181217   40508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.181259   40508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.187222   40508 command_runner.go:130] > b5213941
	I0421 19:17:25.187288   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
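The three ssh_runner blocks above follow the same pattern for each CA certificate: place a PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and create an /etc/ssl/certs/<hash>.0 symlink so OpenSSL-based clients can find it. A rough Go sketch of that pattern (run as root, with openssl on PATH); installCACert is a hypothetical helper, not minikube's actual code, and it links the hash entry straight at the PEM rather than via the intermediate /etc/ssl/certs/<name>.pem link the log shows.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM certificate and
// symlinks /etc/ssl/certs/<hash>.0 to it, mirroring the commands in the log.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// "-f" semantics: replace any stale link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}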
	I0421 19:17:25.197922   40508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:17:25.202731   40508 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:17:25.202753   40508 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0421 19:17:25.202762   40508 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0421 19:17:25.202772   40508 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 19:17:25.202783   40508 command_runner.go:130] > Access: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202795   40508 command_runner.go:130] > Modify: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202807   40508 command_runner.go:130] > Change: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202819   40508 command_runner.go:130] >  Birth: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202863   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 19:17:25.209001   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.209065   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 19:17:25.215069   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.215124   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 19:17:25.220934   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.220982   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 19:17:25.227069   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.227110   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 19:17:25.232789   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.232834   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 19:17:25.238652   40508 command_runner.go:130] > Certificate will not expire
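The -checkend 86400 calls above ask openssl whether each control-plane certificate remains valid for the next 24 hours. The same check can be expressed directly against NotAfter with crypto/x509; a minimal sketch, assuming the certificate path from the log is readable locally (expiresWithin is a hypothetical helper, not part of minikube).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, which is what `openssl x509 -checkend 86400` tests for one day.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path taken from the log; adjust to the certificate to check.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}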
	I0421 19:17:25.238961   40508 kubeadm.go:391] StartCluster: {Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-860427
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:
false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:17:25.239070   40508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 19:17:25.239113   40508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:17:25.279018   40508 command_runner.go:130] > 9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273
	I0421 19:17:25.279041   40508 command_runner.go:130] > ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1
	I0421 19:17:25.279056   40508 command_runner.go:130] > 1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df
	I0421 19:17:25.279064   40508 command_runner.go:130] > 8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5
	I0421 19:17:25.279072   40508 command_runner.go:130] > c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4
	I0421 19:17:25.279081   40508 command_runner.go:130] > 9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5
	I0421 19:17:25.279092   40508 command_runner.go:130] > 8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d
	I0421 19:17:25.279110   40508 command_runner.go:130] > cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2
	I0421 19:17:25.280500   40508 cri.go:89] found id: "9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273"
	I0421 19:17:25.280521   40508 cri.go:89] found id: "ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1"
	I0421 19:17:25.280526   40508 cri.go:89] found id: "1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df"
	I0421 19:17:25.280531   40508 cri.go:89] found id: "8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5"
	I0421 19:17:25.280535   40508 cri.go:89] found id: "c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4"
	I0421 19:17:25.280542   40508 cri.go:89] found id: "9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5"
	I0421 19:17:25.280546   40508 cri.go:89] found id: "8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d"
	I0421 19:17:25.280551   40508 cri.go:89] found id: "cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2"
	I0421 19:17:25.280555   40508 cri.go:89] found id: ""
	I0421 19:17:25.280601   40508 ssh_runner.go:195] Run: sudo runc list -f json
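Before StartCluster proceeds, the runner enumerates existing kube-system containers with `crictl ps -a --quiet` filtered by the io.kubernetes.pod.namespace label, as shown above. A small Go sketch of the same invocation via os/exec; listKubeSystemContainers is a hypothetical wrapper, and minikube itself drives the command over SSH through its own runner rather than locally.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers lists all container IDs (running or exited) whose
// pod belongs to the kube-system namespace, mirroring the crictl call above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}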
	
	
	==> CRI-O <==
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.434640619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727134434614764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c95773cb-8686-4ba7-96a0-d1ba24282f16 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.435488545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a7160b0-5e35-4edb-ad4d-53e4a492b356 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.435544248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a7160b0-5e35-4edb-ad4d-53e4a492b356 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.435939432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a7160b0-5e35-4edb-ad4d-53e4a492b356 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.491051810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c36040c3-4f10-4420-8461-5e54fea79819 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.491133154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c36040c3-4f10-4420-8461-5e54fea79819 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.492814524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74b31d91-5ca9-422c-8c67-acec4b2a68e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.493433649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727134493407506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74b31d91-5ca9-422c-8c67-acec4b2a68e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.494072074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2172eae-cbad-4ef5-9131-eb4fd73ee5fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.494164721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2172eae-cbad-4ef5-9131-eb4fd73ee5fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.494619914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2172eae-cbad-4ef5-9131-eb4fd73ee5fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.547451559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70a8a18a-c4bf-4509-81f7-480d39451661 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.547616885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70a8a18a-c4bf-4509-81f7-480d39451661 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.550111159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03de1664-495b-45c4-bbe7-2547d756add7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.550635052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727134550600170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03de1664-495b-45c4-bbe7-2547d756add7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.551520817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=183be808-c48c-4167-9a02-ed4191757e16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.551602982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=183be808-c48c-4167-9a02-ed4191757e16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.551993087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=183be808-c48c-4167-9a02-ed4191757e16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.601818073Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88720a88-ab85-470a-967b-4f50bab3469c name=/runtime.v1.RuntimeService/Version
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.601922902Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88720a88-ab85-470a-967b-4f50bab3469c name=/runtime.v1.RuntimeService/Version
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.603632776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45f49843-faab-4bbc-bc4e-8665da44ff50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.604041884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727134604019335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45f49843-faab-4bbc-bc4e-8665da44ff50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.604656144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e6bd534-2d56-4db4-9538-b64e1e7d785e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.604748414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e6bd534-2d56-4db4-9538-b64e1e7d785e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:18:54 multinode-860427 crio[2885]: time="2024-04-21 19:18:54.605720518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e6bd534-2d56-4db4-9538-b64e1e7d785e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f65de04738834       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      48 seconds ago       Running             busybox                   1                   14a6e43c576d5       busybox-fc5497c4f-hk7s7
	3a0e0b2881434       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   4ab17feb380c1       coredns-7db6d8ff4d-vs5t7
	719444f97ed78       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   dda1786cc67a6       kindnet-9ldwp
	0709ab54213b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   64265fe67f583       storage-provisioner
	90ad4fdb1c3dc       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   bec8bbb43bb93       kube-proxy-jg6s4
	fe218a845a3aa       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   251fa8224f7f0       kube-scheduler-multinode-860427
	b322cb92ca948       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   bc21dc0654e3a       etcd-multinode-860427
	3a7048938488c       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   ecf371bf70e95       kube-controller-manager-multinode-860427
	2c542f3c92581       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   dae5021f3823b       kube-apiserver-multinode-860427
	e99b231417b34       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   fc3c3ebed26c2       busybox-fc5497c4f-hk7s7
	9b0f66c6a810d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   1a7205876fa91       storage-provisioner
	ff5d612fdfb3e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   d1c9976590750       coredns-7db6d8ff4d-vs5t7
	1b1c152114f7d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   b5257a5fa2e93       kindnet-9ldwp
	8e02f2b64b9de       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   13c2da90f4658       kube-proxy-jg6s4
	c5b23d24e555c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   7f29e8747a7f1       kube-scheduler-multinode-860427
	9fb589731724c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   aee378e8ac0dc       etcd-multinode-860427
	8b1fa05f21062       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   494d9a87baced       kube-apiserver-multinode-860427
	cc29f46df3151       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   1d6252d59eb26       kube-controller-manager-multinode-860427
	
	
	==> coredns [3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58639 - 59006 "HINFO IN 6551346853364553102.4752810316306585780. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026259095s
	
	
	==> coredns [ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1] <==
	[INFO] 10.244.0.3:58767 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001786061s
	[INFO] 10.244.0.3:37173 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00031116s
	[INFO] 10.244.0.3:59498 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071934s
	[INFO] 10.244.0.3:60013 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001165337s
	[INFO] 10.244.0.3:41860 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005795s
	[INFO] 10.244.0.3:60932 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095114s
	[INFO] 10.244.0.3:45075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080404s
	[INFO] 10.244.1.2:35677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249307s
	[INFO] 10.244.1.2:49702 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200978s
	[INFO] 10.244.1.2:58015 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111852s
	[INFO] 10.244.1.2:54380 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000260643s
	[INFO] 10.244.0.3:46013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012317s
	[INFO] 10.244.0.3:40454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008508s
	[INFO] 10.244.0.3:47947 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076971s
	[INFO] 10.244.0.3:37172 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000666s
	[INFO] 10.244.1.2:43856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163894s
	[INFO] 10.244.1.2:37507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000690623s
	[INFO] 10.244.1.2:59238 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139066s
	[INFO] 10.244.1.2:60046 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159981s
	[INFO] 10.244.0.3:54205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080256s
	[INFO] 10.244.0.3:54530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000049187s
	[INFO] 10.244.0.3:50154 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000030665s
	[INFO] 10.244.0.3:52243 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00002735s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-860427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-860427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_11_10_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860427
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:18:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    multinode-860427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bba815c2ad94d64bea00a33989824af
	  System UUID:                6bba815c-2ad9-4d64-bea0-0a33989824af
	  Boot ID:                    76a8137b-dbd7-47e7-bb06-0eb11c9e8461
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hk7s7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-7db6d8ff4d-vs5t7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m31s
	  kube-system                 etcd-multinode-860427                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m45s
	  kube-system                 kindnet-9ldwp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m32s
	  kube-system                 kube-apiserver-multinode-860427             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-multinode-860427    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-jg6s4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-scheduler-multinode-860427             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m30s              kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m45s              kubelet          Node multinode-860427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m45s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m45s              kubelet          Node multinode-860427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s              kubelet          Node multinode-860427 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m45s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m32s              node-controller  Node multinode-860427 event: Registered Node multinode-860427 in Controller
	  Normal  NodeReady                7m29s              kubelet          Node multinode-860427 status is now: NodeReady
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node multinode-860427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node multinode-860427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)  kubelet          Node multinode-860427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                node-controller  Node multinode-860427 event: Registered Node multinode-860427 in Controller
	
	
	Name:               multinode-860427-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860427-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-860427
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_18_12_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:18:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860427-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:18:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:18:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:18:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:18:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:18:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    multinode-860427-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2825fddabdeb4c06a5bb08fe55061b6a
	  System UUID:                2825fdda-bdeb-4c06-a5bb-08fe55061b6a
	  Boot ID:                    6f25b561-af0f-4196-8630-ca5efeabc205
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bsh66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-nw7qf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m49s
	  kube-system                 kube-proxy-qwtz4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m44s                  kube-proxy  
	  Normal  Starting                 39s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m49s (x2 over 6m49s)  kubelet     Node multinode-860427-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s (x2 over 6m49s)  kubelet     Node multinode-860427-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m49s (x2 over 6m49s)  kubelet     Node multinode-860427-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m39s                  kubelet     Node multinode-860427-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  43s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s (x2 over 43s)      kubelet     Node multinode-860427-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x2 over 43s)      kubelet     Node multinode-860427-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x2 over 43s)      kubelet     Node multinode-860427-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                34s                    kubelet     Node multinode-860427-m02 status is now: NodeReady
	
	
	Name:               multinode-860427-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860427-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-860427
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_18_42_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:18:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860427-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:18:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:18:51 +0000   Sun, 21 Apr 2024 19:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:18:51 +0000   Sun, 21 Apr 2024 19:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:18:51 +0000   Sun, 21 Apr 2024 19:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:18:51 +0000   Sun, 21 Apr 2024 19:18:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    multinode-860427-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6787aa86d7ae4afd82be91fb135c7ad1
	  System UUID:                6787aa86-d7ae-4afd-82be-91fb135c7ad1
	  Boot ID:                    8cc91e69-0697-4679-8713-c1785a7e2662
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wtv4m       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-proxy-rpj7t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m55s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m14s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m1s (x2 over 6m1s)    kubelet     Node multinode-860427-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x2 over 6m1s)    kubelet     Node multinode-860427-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x2 over 6m1s)    kubelet     Node multinode-860427-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m51s                  kubelet     Node multinode-860427-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m19s (x2 over 5m19s)  kubelet     Node multinode-860427-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m19s (x2 over 5m19s)  kubelet     Node multinode-860427-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m19s (x2 over 5m19s)  kubelet     Node multinode-860427-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m10s                  kubelet     Node multinode-860427-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-860427-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-860427-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-860427-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-860427-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[ +10.968699] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.063420] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061360] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.170002] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.137810] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.329562] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.789014] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.064580] kauditd_printk_skb: 130 callbacks suppressed
	[Apr21 19:11] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +6.569054] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.083339] kauditd_printk_skb: 97 callbacks suppressed
	[ +13.734722] systemd-fstab-generator[1476]: Ignoring "noauto" option for root device
	[  +0.137373] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.229048] kauditd_printk_skb: 82 callbacks suppressed
	[Apr21 19:17] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.154820] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.185743] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.145363] systemd-fstab-generator[2842]: Ignoring "noauto" option for root device
	[  +0.297750] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +0.754231] systemd-fstab-generator[2971]: Ignoring "noauto" option for root device
	[  +1.855347] systemd-fstab-generator[3097]: Ignoring "noauto" option for root device
	[  +5.770520] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.137504] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.670673] systemd-fstab-generator[3918]: Ignoring "noauto" option for root device
	[Apr21 19:18] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5] <==
	{"level":"info","ts":"2024-04-21T19:11:04.666459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-04-21T19:11:04.666743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:11:04.67698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T19:11:04.682599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:11:04.682702Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:11:04.682783Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:11:04.682959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:11:04.682994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-04-21T19:11:54.25539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.984266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176521758976184188 > lease_revoke:<id:1e348f0211af0f47>","response":"size:27"}
	{"level":"warn","ts":"2024-04-21T19:12:05.45912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.380032ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176521758976184241 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-860427-m02.17c8616079a4e2d9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-860427-m02.17c8616079a4e2d9\" value_size:642 lease:2176521758976183616 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T19:12:05.459658Z","caller":"traceutil/trace.go:171","msg":"trace[1465477537] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"234.978624ms","start":"2024-04-21T19:12:05.224662Z","end":"2024-04-21T19:12:05.45964Z","steps":["trace[1465477537] 'process raft request'  (duration: 72.630033ms)","trace[1465477537] 'compare'  (duration: 160.967897ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T19:12:05.459817Z","caller":"traceutil/trace.go:171","msg":"trace[66219805] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"182.837644ms","start":"2024-04-21T19:12:05.276882Z","end":"2024-04-21T19:12:05.45972Z","steps":["trace[66219805] 'process raft request'  (duration: 182.597618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T19:12:53.681442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.473442ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176521758976184648 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-860427-m03.17c8616bb3005bae\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-860427-m03.17c8616bb3005bae\" value_size:642 lease:2176521758976184317 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T19:12:53.681814Z","caller":"traceutil/trace.go:171","msg":"trace[1234211206] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"246.188608ms","start":"2024-04-21T19:12:53.435583Z","end":"2024-04-21T19:12:53.681772Z","steps":["trace[1234211206] 'process raft request'  (duration: 74.304647ms)","trace[1234211206] 'compare'  (duration: 171.276929ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T19:12:53.682562Z","caller":"traceutil/trace.go:171","msg":"trace[10322066] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"179.030374ms","start":"2024-04-21T19:12:53.503517Z","end":"2024-04-21T19:12:53.682548Z","steps":["trace[10322066] 'process raft request'  (duration: 178.144029ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T19:15:51.608493Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-21T19:15:51.608651Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-860427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"warn","ts":"2024-04-21T19:15:51.608858Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-21T19:15:51.60895Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-21T19:15:51.662405Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-21T19:15:51.662462Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-21T19:15:51.662533Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"info","ts":"2024-04-21T19:15:51.665869Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:15:51.666328Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:15:51.666366Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-860427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274] <==
	{"level":"info","ts":"2024-04-21T19:17:28.338031Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T19:17:28.338066Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T19:17:28.344147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-04-21T19:17:28.345465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-04-21T19:17:28.347542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:17:28.347671Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:17:28.358916Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T19:17:28.362457Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3276445ff8d31e34","initial-advertise-peer-urls":["https://192.168.39.100:2380"],"listen-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T19:17:28.362832Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T19:17:28.360493Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:17:28.365282Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:17:29.94487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-21T19:17:29.94491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:17:29.944955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-04-21T19:17:29.944969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.944975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.944983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.944993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.950393Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:multinode-860427 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:17:29.950475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:17:29.950632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:17:29.95119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:17:29.95129Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:17:29.95312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-04-21T19:17:29.953458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:18:55 up 8 min,  0 users,  load average: 0.11, 0.14, 0.09
	Linux multinode-860427 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df] <==
	I0421 19:15:05.334649       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:15.341520       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:15.341565       1 main.go:227] handling current node
	I0421 19:15:15.341576       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:15.341582       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:15.341698       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:15.341705       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:25.353601       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:25.353753       1 main.go:227] handling current node
	I0421 19:15:25.353778       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:25.353784       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:25.353908       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:25.353944       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:35.358583       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:35.358714       1 main.go:227] handling current node
	I0421 19:15:35.358755       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:35.358780       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:35.358915       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:35.358935       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:45.369434       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:45.369679       1 main.go:227] handling current node
	I0421 19:15:45.369710       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:45.369808       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:45.370145       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:45.370173       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc] <==
	I0421 19:18:13.422019       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:18:23.427911       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:18:23.427995       1 main.go:227] handling current node
	I0421 19:18:23.428018       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:18:23.428035       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:18:23.428260       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:18:23.428317       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:18:33.435754       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:18:33.435805       1 main.go:227] handling current node
	I0421 19:18:33.435815       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:18:33.435826       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:18:33.435925       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:18:33.435961       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:18:43.452132       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:18:43.452264       1 main.go:227] handling current node
	I0421 19:18:43.452312       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:18:43.452324       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:18:43.452510       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:18:43.452628       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.2.0/24] 
	I0421 19:18:53.458294       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:18:53.458344       1 main.go:227] handling current node
	I0421 19:18:53.458360       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:18:53.458366       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:18:53.458520       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:18:53.458550       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c] <==
	I0421 19:17:31.453427       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 19:17:31.453472       1 policy_source.go:224] refreshing policies
	I0421 19:17:31.464520       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0421 19:17:31.466811       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 19:17:31.466849       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 19:17:31.468112       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0421 19:17:31.468167       1 aggregator.go:165] initial CRD sync complete...
	I0421 19:17:31.468192       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 19:17:31.468275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 19:17:31.468298       1 cache.go:39] Caches are synced for autoregister controller
	I0421 19:17:31.469357       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 19:17:31.469453       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 19:17:31.469887       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0421 19:17:31.469944       1 shared_informer.go:320] Caches are synced for configmaps
	I0421 19:17:31.472329       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 19:17:31.474726       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0421 19:17:31.480911       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0421 19:17:32.288938       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0421 19:17:33.842612       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0421 19:17:33.979515       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0421 19:17:33.993119       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0421 19:17:34.058927       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 19:17:34.065581       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0421 19:17:44.329393       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 19:17:44.359444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d] <==
	W0421 19:15:51.636479       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636530       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636582       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636640       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636691       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636741       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.637129       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.639578       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.639847       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.640529       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.640599       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.640751       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641029       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641099       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641152       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641319       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641381       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641438       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641568       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641618       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641662       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641710       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641755       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641797       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641840       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58] <==
	I0421 19:17:44.939066       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 19:17:44.978965       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 19:17:44.979043       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0421 19:18:07.582022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.870378ms"
	I0421 19:18:07.582142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.575µs"
	I0421 19:18:07.596188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.737692ms"
	I0421 19:18:07.596340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.083µs"
	I0421 19:18:12.027844       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m02\" does not exist"
	I0421 19:18:12.037703       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m02" podCIDRs=["10.244.1.0/24"]
	I0421 19:18:12.948535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.746µs"
	I0421 19:18:12.963925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.287µs"
	I0421 19:18:13.011738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.837µs"
	I0421 19:18:13.021361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.441µs"
	I0421 19:18:13.027752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.757µs"
	I0421 19:18:14.645744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.131µs"
	I0421 19:18:20.540551       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:20.561022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.444µs"
	I0421 19:18:20.577678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.807µs"
	I0421 19:18:24.415853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.162596ms"
	I0421 19:18:24.416464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.643µs"
	I0421 19:18:41.196324       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:42.200886       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m03\" does not exist"
	I0421 19:18:42.200979       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:42.211775       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m03" podCIDRs=["10.244.2.0/24"]
	I0421 19:18:51.470889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	
	
	==> kube-controller-manager [cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2] <==
	I0421 19:11:35.686976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.477µs"
	I0421 19:12:05.462722       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m02\" does not exist"
	I0421 19:12:05.494736       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m02" podCIDRs=["10.244.1.0/24"]
	I0421 19:12:07.197302       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-860427-m02"
	I0421 19:12:15.225082       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:12:17.678609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.643223ms"
	I0421 19:12:17.698828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.156317ms"
	I0421 19:12:17.698889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.01µs"
	I0421 19:12:21.165319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.556944ms"
	I0421 19:12:21.165613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.951µs"
	I0421 19:12:21.280925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.88178ms"
	I0421 19:12:21.281161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.518µs"
	I0421 19:12:53.684013       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m03\" does not exist"
	I0421 19:12:53.684750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:12:53.716789       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m03" podCIDRs=["10.244.2.0/24"]
	I0421 19:12:57.218387       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-860427-m03"
	I0421 19:13:03.976691       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:13:34.816612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:13:35.815353       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:13:35.817927       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m03\" does not exist"
	I0421 19:13:35.838678       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m03" podCIDRs=["10.244.3.0/24"]
	I0421 19:13:45.003553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:14:27.276100       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:14:32.375762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.872105ms"
	I0421 19:14:32.375925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.089µs"
	
	
	==> kube-proxy [8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5] <==
	I0421 19:11:24.264001       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:11:24.279167       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0421 19:11:24.358020       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:11:24.358085       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:11:24.358107       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:11:24.367090       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:11:24.367405       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:11:24.367443       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:11:24.368764       1 config.go:192] "Starting service config controller"
	I0421 19:11:24.368807       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:11:24.368831       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:11:24.368836       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:11:24.369129       1 config.go:319] "Starting node config controller"
	I0421 19:11:24.369169       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:11:24.471385       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:11:24.471442       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 19:11:24.471681       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1] <==
	I0421 19:17:32.817685       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:17:32.844730       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0421 19:17:32.945421       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:17:32.945542       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:17:32.945627       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:17:32.950727       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:17:32.951168       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:17:32.951313       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:17:32.952676       1 config.go:192] "Starting service config controller"
	I0421 19:17:32.955392       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:17:32.955539       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:17:32.955666       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:17:32.955690       1 config.go:319] "Starting node config controller"
	I0421 19:17:32.955764       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:17:33.055902       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 19:17:33.056003       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:17:33.057416       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4] <==
	E0421 19:11:07.007339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:11:07.007727       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:07.007846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:07.007997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:07.008105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:07.013361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:11:07.013481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:11:07.829032       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:11:07.829094       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:11:07.830068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:11:07.830250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:11:07.928365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:11:07.928503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:11:07.963783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:07.963813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:08.057077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 19:11:08.057107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 19:11:08.064099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:11:08.064286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:11:08.207119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:08.207277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:08.244181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 19:11:08.244285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0421 19:11:10.572888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0421 19:15:51.618748       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec] <==
	I0421 19:17:28.713684       1 serving.go:380] Generated self-signed cert in-memory
	W0421 19:17:31.358538       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0421 19:17:31.359156       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:17:31.359465       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0421 19:17:31.359643       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0421 19:17:31.389933       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0421 19:17:31.389986       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:17:31.391858       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0421 19:17:31.391910       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 19:17:31.391897       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0421 19:17:31.391915       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 19:17:31.492415       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 19:17:27 multinode-860427 kubelet[3104]: E0421 19:17:27.743940    3104 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	Apr 21 19:17:28 multinode-860427 kubelet[3104]: I0421 19:17:28.254830    3104 kubelet_node_status.go:73] "Attempting to register node" node="multinode-860427"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.532682    3104 kubelet_node_status.go:112] "Node was previously registered" node="multinode-860427"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.533071    3104 kubelet_node_status.go:76] "Successfully registered node" node="multinode-860427"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.534888    3104 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.535975    3104 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.716351    3104 apiserver.go:52] "Watching apiserver"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.718760    3104 topology_manager.go:215] "Topology Admit Handler" podUID="c804d5e1-21d2-488c-aa22-baa3582ae821" podNamespace="kube-system" podName="kube-proxy-jg6s4"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.720861    3104 topology_manager.go:215] "Topology Admit Handler" podUID="9fbc53d5-18bf-4b94-9431-79b4ec06767d" podNamespace="kube-system" podName="kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.721122    3104 topology_manager.go:215] "Topology Admit Handler" podUID="f4a7eaeb-e84d-43b3-803d-64ac0f894fa0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vs5t7"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.721467    3104 topology_manager.go:215] "Topology Admit Handler" podUID="2357556e-faa1-43ba-9e1a-f867acfd75fa" podNamespace="kube-system" podName="storage-provisioner"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.721735    3104 topology_manager.go:215] "Topology Admit Handler" podUID="826c848b-a674-490c-9703-ac39fbc95f4c" podNamespace="default" podName="busybox-fc5497c4f-hk7s7"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.734529    3104 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.771763    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c804d5e1-21d2-488c-aa22-baa3582ae821-lib-modules\") pod \"kube-proxy-jg6s4\" (UID: \"c804d5e1-21d2-488c-aa22-baa3582ae821\") " pod="kube-system/kube-proxy-jg6s4"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.771986    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fbc53d5-18bf-4b94-9431-79b4ec06767d-lib-modules\") pod \"kindnet-9ldwp\" (UID: \"9fbc53d5-18bf-4b94-9431-79b4ec06767d\") " pod="kube-system/kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772266    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2357556e-faa1-43ba-9e1a-f867acfd75fa-tmp\") pod \"storage-provisioner\" (UID: \"2357556e-faa1-43ba-9e1a-f867acfd75fa\") " pod="kube-system/storage-provisioner"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772390    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9fbc53d5-18bf-4b94-9431-79b4ec06767d-cni-cfg\") pod \"kindnet-9ldwp\" (UID: \"9fbc53d5-18bf-4b94-9431-79b4ec06767d\") " pod="kube-system/kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772566    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fbc53d5-18bf-4b94-9431-79b4ec06767d-xtables-lock\") pod \"kindnet-9ldwp\" (UID: \"9fbc53d5-18bf-4b94-9431-79b4ec06767d\") " pod="kube-system/kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772724    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c804d5e1-21d2-488c-aa22-baa3582ae821-xtables-lock\") pod \"kube-proxy-jg6s4\" (UID: \"c804d5e1-21d2-488c-aa22-baa3582ae821\") " pod="kube-system/kube-proxy-jg6s4"
	Apr 21 19:17:39 multinode-860427 kubelet[3104]: I0421 19:17:39.501272    3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 21 19:18:26 multinode-860427 kubelet[3104]: E0421 19:18:26.825164    3104 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:18:54.118416   42016 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18702-3854/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
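The "token too long" failure in the stderr block above comes from Go's bufio.Scanner: Scan() stops and Err() returns bufio.ErrTooLong (printed as "bufio.Scanner: token too long") once a single line exceeds the default limit of bufio.MaxScanTokenSize (64 KiB), so lastStart.txt evidently contains a line longer than that. The sketch below only illustrates how a line-by-line reader can raise that limit with Scanner.Buffer; the file name is a placeholder and this is not minikube's actual logs.go code.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path; stands in for the lastStart.txt the report could not read.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-line limit is bufio.MaxScanTokenSize (64 KiB); a longer
		// line aborts the scan with bufio.ErrTooLong. Grow the buffer so lines up
		// to 10 MiB can be read.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("reading log: %v", err)
		}
	}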
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-860427 -n multinode-860427
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-860427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (308.19s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 stop
E0421 19:19:06.204757   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860427 stop: exit status 82 (2m0.498315389s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-860427-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-860427 stop": exit status 82
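Exit status 82 with GUEST_STOP_TIMEOUT indicates the stop command kept polling the multinode-860427-m02 VM for roughly two minutes (matching the 2m0.49s runtime above) and the driver still reported state "Running" when the deadline passed. The snippet below is only a generic illustration of that wait-with-deadline pattern, not minikube's implementation; getState is a hypothetical stand-in for the driver's state query.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState is a hypothetical stand-in for the driver call that reports VM state.
	func getState() (string, error) { return "Running", nil }

	// waitForStop polls until the VM reports "Stopped" or the deadline passes.
	func waitForStop(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Mirrors the ~2 minute window seen in the report before the timeout error.
		if err := waitForStop(2 * time.Minute); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}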
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status
E0421 19:21:09.209373   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860427 status: exit status 3 (18.857960266s)

                                                
                                                
-- stdout --
	multinode-860427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860427-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:21:17.922483   42686 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0421 19:21:17.922517   42686 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-860427 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-860427 -n multinode-860427
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-860427 logs -n 25: (1.614941186s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427:/home/docker/cp-test_multinode-860427-m02_multinode-860427.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427 sudo cat                                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m02_multinode-860427.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03:/home/docker/cp-test_multinode-860427-m02_multinode-860427-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427-m03 sudo cat                                   | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m02_multinode-860427-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp testdata/cp-test.txt                                                | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2829491611/001/cp-test_multinode-860427-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427:/home/docker/cp-test_multinode-860427-m03_multinode-860427.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427 sudo cat                                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m03_multinode-860427.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02:/home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427-m02 sudo cat                                   | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-860427 node stop m03                                                          | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	| node    | multinode-860427 node start                                                             | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	| stop    | -p multinode-860427                                                                     | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	| start   | -p multinode-860427                                                                     | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:15 UTC | 21 Apr 24 19:18 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC |                     |
	| node    | multinode-860427 node delete                                                            | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC | 21 Apr 24 19:18 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-860427 stop                                                                   | multinode-860427 | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:15:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:15:50.735473   40508 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:15:50.735592   40508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:15:50.735600   40508 out.go:304] Setting ErrFile to fd 2...
	I0421 19:15:50.735605   40508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:15:50.735814   40508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:15:50.736327   40508 out.go:298] Setting JSON to false
	I0421 19:15:50.737240   40508 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3449,"bootTime":1713723502,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:15:50.737294   40508 start.go:139] virtualization: kvm guest
	I0421 19:15:50.740312   40508 out.go:177] * [multinode-860427] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:15:50.741826   40508 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:15:50.743352   40508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:15:50.741795   40508 notify.go:220] Checking for updates...
	I0421 19:15:50.744734   40508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:15:50.746182   40508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:15:50.747383   40508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:15:50.748598   40508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:15:50.750218   40508 config.go:182] Loaded profile config "multinode-860427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:15:50.750296   40508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:15:50.750674   40508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:15:50.750712   40508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:15:50.765693   40508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0421 19:15:50.766109   40508 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:15:50.766690   40508 main.go:141] libmachine: Using API Version  1
	I0421 19:15:50.766715   40508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:15:50.767003   40508 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:15:50.767208   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:15:50.802725   40508 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:15:50.804139   40508 start.go:297] selected driver: kvm2
	I0421 19:15:50.804156   40508 start.go:901] validating driver "kvm2" against &{Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNam
e:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:15:50.804289   40508 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:15:50.804585   40508 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:15:50.804679   40508 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:15:50.821726   40508 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:15:50.822469   40508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:15:50.822537   40508 cni.go:84] Creating CNI manager for ""
	I0421 19:15:50.822550   40508 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 19:15:50.822614   40508 start.go:340] cluster config:
	{Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kube
virt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:15:50.822750   40508 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:15:50.824753   40508 out.go:177] * Starting "multinode-860427" primary control-plane node in "multinode-860427" cluster
	I0421 19:15:50.826111   40508 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:15:50.826157   40508 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:15:50.826168   40508 cache.go:56] Caching tarball of preloaded images
	I0421 19:15:50.826243   40508 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:15:50.826257   40508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:15:50.826385   40508 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/config.json ...
	I0421 19:15:50.826575   40508 start.go:360] acquireMachinesLock for multinode-860427: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:15:50.826621   40508 start.go:364] duration metric: took 26.502µs to acquireMachinesLock for "multinode-860427"
	I0421 19:15:50.826654   40508 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:15:50.826662   40508 fix.go:54] fixHost starting: 
	I0421 19:15:50.826931   40508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:15:50.826968   40508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:15:50.841411   40508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0421 19:15:50.841853   40508 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:15:50.842342   40508 main.go:141] libmachine: Using API Version  1
	I0421 19:15:50.842360   40508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:15:50.842708   40508 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:15:50.842917   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:15:50.843078   40508 main.go:141] libmachine: (multinode-860427) Calling .GetState
	I0421 19:15:50.844802   40508 fix.go:112] recreateIfNeeded on multinode-860427: state=Running err=<nil>
	W0421 19:15:50.844820   40508 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:15:50.847716   40508 out.go:177] * Updating the running kvm2 "multinode-860427" VM ...
	I0421 19:15:50.849140   40508 machine.go:94] provisionDockerMachine start ...
	I0421 19:15:50.849158   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:15:50.849408   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:50.852110   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.852577   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:50.852610   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.852752   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:50.852950   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.853114   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.853255   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:50.853417   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:50.853591   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:50.853600   40508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:15:50.972348   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860427
	
	I0421 19:15:50.972377   40508 main.go:141] libmachine: (multinode-860427) Calling .GetMachineName
	I0421 19:15:50.972598   40508 buildroot.go:166] provisioning hostname "multinode-860427"
	I0421 19:15:50.972620   40508 main.go:141] libmachine: (multinode-860427) Calling .GetMachineName
	I0421 19:15:50.972821   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:50.975591   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.975942   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:50.975973   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:50.976119   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:50.976333   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.976503   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:50.976701   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:50.976882   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:50.977090   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:50.977113   40508 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-860427 && echo "multinode-860427" | sudo tee /etc/hostname
	I0421 19:15:51.113207   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860427
	
	I0421 19:15:51.113245   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.116061   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.116480   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.116511   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.116715   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:51.116928   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.117084   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.117187   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:51.117354   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:51.117533   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:51.117556   40508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-860427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-860427/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-860427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:15:51.231288   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:15:51.231319   40508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:15:51.231353   40508 buildroot.go:174] setting up certificates
	I0421 19:15:51.231360   40508 provision.go:84] configureAuth start
	I0421 19:15:51.231377   40508 main.go:141] libmachine: (multinode-860427) Calling .GetMachineName
	I0421 19:15:51.231692   40508 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:15:51.234207   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.234570   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.234599   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.234672   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.236672   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.237046   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.237080   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.237200   40508 provision.go:143] copyHostCerts
	I0421 19:15:51.237230   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:15:51.237265   40508 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:15:51.237280   40508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:15:51.237352   40508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:15:51.237423   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:15:51.237440   40508 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:15:51.237450   40508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:15:51.237482   40508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:15:51.237533   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:15:51.237555   40508 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:15:51.237562   40508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:15:51.237599   40508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:15:51.237660   40508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.multinode-860427 san=[127.0.0.1 192.168.39.100 localhost minikube multinode-860427]
	I0421 19:15:51.285371   40508 provision.go:177] copyRemoteCerts
	I0421 19:15:51.285438   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:15:51.285467   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.288066   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.288392   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.288418   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.288637   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:51.288847   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.289003   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:51.289115   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:15:51.379244   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0421 19:15:51.379312   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:15:51.410219   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0421 19:15:51.410287   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0421 19:15:51.439796   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0421 19:15:51.439876   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:15:51.468435   40508 provision.go:87] duration metric: took 237.058228ms to configureAuth
	I0421 19:15:51.468469   40508 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:15:51.468744   40508 config.go:182] Loaded profile config "multinode-860427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:15:51.468835   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:15:51.471459   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.471840   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:15:51.471858   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:15:51.472136   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:15:51.472342   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.472479   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:15:51.472557   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:15:51.472706   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:15:51.472916   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:15:51.472945   40508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:17:22.394548   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:17:22.394577   40508 machine.go:97] duration metric: took 1m31.545425039s to provisionDockerMachine
	I0421 19:17:22.394593   40508 start.go:293] postStartSetup for "multinode-860427" (driver="kvm2")
	I0421 19:17:22.394610   40508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:17:22.394652   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.394989   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:17:22.395021   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.397869   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.398438   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.398468   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.398658   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.398841   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.398986   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.399092   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:17:22.492752   40508 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:17:22.497972   40508 command_runner.go:130] > NAME=Buildroot
	I0421 19:17:22.497993   40508 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 19:17:22.497999   40508 command_runner.go:130] > ID=buildroot
	I0421 19:17:22.498006   40508 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 19:17:22.498012   40508 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 19:17:22.498383   40508 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:17:22.498408   40508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:17:22.498485   40508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:17:22.498561   40508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:17:22.498572   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /etc/ssl/certs/111752.pem
	I0421 19:17:22.498651   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:17:22.510667   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:17:22.539183   40508 start.go:296] duration metric: took 144.574371ms for postStartSetup
	I0421 19:17:22.539229   40508 fix.go:56] duration metric: took 1m31.712567169s for fixHost
	I0421 19:17:22.539250   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.541978   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.542361   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.542395   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.542625   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.542860   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.543052   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.543190   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.543374   40508 main.go:141] libmachine: Using SSH client type: native
	I0421 19:17:22.543594   40508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0421 19:17:22.543608   40508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:17:22.659572   40508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713727042.641962267
	
	I0421 19:17:22.659594   40508 fix.go:216] guest clock: 1713727042.641962267
	I0421 19:17:22.659608   40508 fix.go:229] Guest: 2024-04-21 19:17:22.641962267 +0000 UTC Remote: 2024-04-21 19:17:22.539233659 +0000 UTC m=+91.849775258 (delta=102.728608ms)
	I0421 19:17:22.659655   40508 fix.go:200] guest clock delta is within tolerance: 102.728608ms
	I0421 19:17:22.659666   40508 start.go:83] releasing machines lock for "multinode-860427", held for 1m31.833032579s
	I0421 19:17:22.659692   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.659962   40508 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:17:22.662726   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.663162   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.663193   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.663400   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.663901   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.664077   40508 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:17:22.664166   40508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:17:22.664209   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.664309   40508 ssh_runner.go:195] Run: cat /version.json
	I0421 19:17:22.664335   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:17:22.666564   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.666864   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.666898   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.667034   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.667064   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.667286   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.667399   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:22.667428   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:22.667436   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.667540   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:17:22.667662   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:17:22.667808   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:17:22.667920   40508 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:17:22.668044   40508 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:17:22.779728   40508 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 19:17:22.780610   40508 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0421 19:17:22.780748   40508 ssh_runner.go:195] Run: systemctl --version
	I0421 19:17:22.787687   40508 command_runner.go:130] > systemd 252 (252)
	I0421 19:17:22.787764   40508 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0421 19:17:22.787838   40508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:17:22.953485   40508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 19:17:22.960600   40508 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0421 19:17:22.960937   40508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:17:22.961006   40508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:17:22.971756   40508 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0421 19:17:22.971780   40508 start.go:494] detecting cgroup driver to use...
	I0421 19:17:22.971844   40508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:17:22.991058   40508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:17:23.007285   40508 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:17:23.007343   40508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:17:23.022828   40508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:17:23.037651   40508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:17:23.194461   40508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:17:23.341107   40508 docker.go:233] disabling docker service ...
	I0421 19:17:23.341184   40508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:17:23.358370   40508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:17:23.373137   40508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:17:23.527071   40508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:17:23.669898   40508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:17:23.685705   40508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:17:23.709125   40508 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0421 19:17:23.709171   40508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 19:17:23.709219   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.721570   40508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:17:23.721646   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.733308   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.745568   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.757324   40508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:17:23.768959   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.780287   40508 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.793212   40508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:17:23.804395   40508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:17:23.814155   40508 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 19:17:23.814291   40508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:17:23.824111   40508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:17:23.967925   40508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:17:24.215968   40508 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:17:24.216039   40508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:17:24.222284   40508 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0421 19:17:24.222303   40508 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 19:17:24.222310   40508 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0421 19:17:24.222316   40508 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 19:17:24.222322   40508 command_runner.go:130] > Access: 2024-04-21 19:17:24.167203513 +0000
	I0421 19:17:24.222329   40508 command_runner.go:130] > Modify: 2024-04-21 19:17:24.089200083 +0000
	I0421 19:17:24.222335   40508 command_runner.go:130] > Change: 2024-04-21 19:17:24.089200083 +0000
	I0421 19:17:24.222339   40508 command_runner.go:130] >  Birth: -
	I0421 19:17:24.222702   40508 start.go:562] Will wait 60s for crictl version
	I0421 19:17:24.222764   40508 ssh_runner.go:195] Run: which crictl
	I0421 19:17:24.227359   40508 command_runner.go:130] > /usr/bin/crictl
	I0421 19:17:24.227708   40508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:17:24.270722   40508 command_runner.go:130] > Version:  0.1.0
	I0421 19:17:24.270755   40508 command_runner.go:130] > RuntimeName:  cri-o
	I0421 19:17:24.270760   40508 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0421 19:17:24.270766   40508 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 19:17:24.270978   40508 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:17:24.271067   40508 ssh_runner.go:195] Run: crio --version
	I0421 19:17:24.301568   40508 command_runner.go:130] > crio version 1.29.1
	I0421 19:17:24.301599   40508 command_runner.go:130] > Version:        1.29.1
	I0421 19:17:24.301613   40508 command_runner.go:130] > GitCommit:      unknown
	I0421 19:17:24.301620   40508 command_runner.go:130] > GitCommitDate:  unknown
	I0421 19:17:24.301625   40508 command_runner.go:130] > GitTreeState:   clean
	I0421 19:17:24.301634   40508 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0421 19:17:24.301639   40508 command_runner.go:130] > GoVersion:      go1.21.6
	I0421 19:17:24.301645   40508 command_runner.go:130] > Compiler:       gc
	I0421 19:17:24.301651   40508 command_runner.go:130] > Platform:       linux/amd64
	I0421 19:17:24.301658   40508 command_runner.go:130] > Linkmode:       dynamic
	I0421 19:17:24.301665   40508 command_runner.go:130] > BuildTags:      
	I0421 19:17:24.301672   40508 command_runner.go:130] >   containers_image_ostree_stub
	I0421 19:17:24.301683   40508 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0421 19:17:24.301693   40508 command_runner.go:130] >   btrfs_noversion
	I0421 19:17:24.301701   40508 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0421 19:17:24.301710   40508 command_runner.go:130] >   libdm_no_deferred_remove
	I0421 19:17:24.301717   40508 command_runner.go:130] >   seccomp
	I0421 19:17:24.301725   40508 command_runner.go:130] > LDFlags:          unknown
	I0421 19:17:24.301735   40508 command_runner.go:130] > SeccompEnabled:   true
	I0421 19:17:24.301742   40508 command_runner.go:130] > AppArmorEnabled:  false
	I0421 19:17:24.303166   40508 ssh_runner.go:195] Run: crio --version
	I0421 19:17:24.336877   40508 command_runner.go:130] > crio version 1.29.1
	I0421 19:17:24.336901   40508 command_runner.go:130] > Version:        1.29.1
	I0421 19:17:24.336909   40508 command_runner.go:130] > GitCommit:      unknown
	I0421 19:17:24.336916   40508 command_runner.go:130] > GitCommitDate:  unknown
	I0421 19:17:24.336922   40508 command_runner.go:130] > GitTreeState:   clean
	I0421 19:17:24.336931   40508 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0421 19:17:24.336937   40508 command_runner.go:130] > GoVersion:      go1.21.6
	I0421 19:17:24.336943   40508 command_runner.go:130] > Compiler:       gc
	I0421 19:17:24.336951   40508 command_runner.go:130] > Platform:       linux/amd64
	I0421 19:17:24.336958   40508 command_runner.go:130] > Linkmode:       dynamic
	I0421 19:17:24.336964   40508 command_runner.go:130] > BuildTags:      
	I0421 19:17:24.336973   40508 command_runner.go:130] >   containers_image_ostree_stub
	I0421 19:17:24.336980   40508 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0421 19:17:24.336987   40508 command_runner.go:130] >   btrfs_noversion
	I0421 19:17:24.336994   40508 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0421 19:17:24.337008   40508 command_runner.go:130] >   libdm_no_deferred_remove
	I0421 19:17:24.337014   40508 command_runner.go:130] >   seccomp
	I0421 19:17:24.337018   40508 command_runner.go:130] > LDFlags:          unknown
	I0421 19:17:24.337022   40508 command_runner.go:130] > SeccompEnabled:   true
	I0421 19:17:24.337028   40508 command_runner.go:130] > AppArmorEnabled:  false
	I0421 19:17:24.339040   40508 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 19:17:24.340427   40508 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:17:24.342869   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:24.343165   40508 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:17:24.343186   40508 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:17:24.343378   40508 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 19:17:24.348591   40508 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0421 19:17:24.348688   40508 kubeadm.go:877] updating cluster {Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:17:24.348824   40508 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:17:24.348865   40508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:17:24.400609   40508 command_runner.go:130] > {
	I0421 19:17:24.400630   40508 command_runner.go:130] >   "images": [
	I0421 19:17:24.400635   40508 command_runner.go:130] >     {
	I0421 19:17:24.400643   40508 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0421 19:17:24.400648   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400657   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0421 19:17:24.400663   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400671   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400687   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0421 19:17:24.400701   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0421 19:17:24.400710   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400720   40508 command_runner.go:130] >       "size": "65291810",
	I0421 19:17:24.400726   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.400735   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.400755   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.400770   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.400776   40508 command_runner.go:130] >     },
	I0421 19:17:24.400782   40508 command_runner.go:130] >     {
	I0421 19:17:24.400790   40508 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0421 19:17:24.400798   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400803   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0421 19:17:24.400808   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400813   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400823   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0421 19:17:24.400838   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0421 19:17:24.400844   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400849   40508 command_runner.go:130] >       "size": "1363676",
	I0421 19:17:24.400855   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.400865   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.400871   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.400875   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.400879   40508 command_runner.go:130] >     },
	I0421 19:17:24.400882   40508 command_runner.go:130] >     {
	I0421 19:17:24.400888   40508 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0421 19:17:24.400894   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400899   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0421 19:17:24.400903   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400907   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400915   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0421 19:17:24.400924   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0421 19:17:24.400930   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400936   40508 command_runner.go:130] >       "size": "31470524",
	I0421 19:17:24.400942   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.400946   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.400951   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.400955   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.400962   40508 command_runner.go:130] >     },
	I0421 19:17:24.400965   40508 command_runner.go:130] >     {
	I0421 19:17:24.400970   40508 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0421 19:17:24.400977   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.400981   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0421 19:17:24.400985   40508 command_runner.go:130] >       ],
	I0421 19:17:24.400989   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.400998   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0421 19:17:24.401014   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0421 19:17:24.401020   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401025   40508 command_runner.go:130] >       "size": "61245718",
	I0421 19:17:24.401030   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.401035   40508 command_runner.go:130] >       "username": "nonroot",
	I0421 19:17:24.401041   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401046   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401052   40508 command_runner.go:130] >     },
	I0421 19:17:24.401056   40508 command_runner.go:130] >     {
	I0421 19:17:24.401062   40508 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0421 19:17:24.401066   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401071   40508 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0421 19:17:24.401077   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401081   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401088   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0421 19:17:24.401097   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0421 19:17:24.401101   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401105   40508 command_runner.go:130] >       "size": "150779692",
	I0421 19:17:24.401110   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401115   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401119   40508 command_runner.go:130] >       },
	I0421 19:17:24.401126   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401130   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401134   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401138   40508 command_runner.go:130] >     },
	I0421 19:17:24.401143   40508 command_runner.go:130] >     {
	I0421 19:17:24.401149   40508 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0421 19:17:24.401155   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401160   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0421 19:17:24.401164   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401168   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401175   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0421 19:17:24.401184   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0421 19:17:24.401188   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401192   40508 command_runner.go:130] >       "size": "117609952",
	I0421 19:17:24.401196   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401200   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401203   40508 command_runner.go:130] >       },
	I0421 19:17:24.401207   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401211   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401214   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401219   40508 command_runner.go:130] >     },
	I0421 19:17:24.401224   40508 command_runner.go:130] >     {
	I0421 19:17:24.401232   40508 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0421 19:17:24.401236   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401248   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0421 19:17:24.401254   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401257   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401265   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0421 19:17:24.401275   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0421 19:17:24.401278   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401282   40508 command_runner.go:130] >       "size": "112170310",
	I0421 19:17:24.401285   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401289   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401292   40508 command_runner.go:130] >       },
	I0421 19:17:24.401296   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401301   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401304   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401310   40508 command_runner.go:130] >     },
	I0421 19:17:24.401313   40508 command_runner.go:130] >     {
	I0421 19:17:24.401320   40508 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0421 19:17:24.401325   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401330   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0421 19:17:24.401333   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401338   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401351   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0421 19:17:24.401361   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0421 19:17:24.401364   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401368   40508 command_runner.go:130] >       "size": "85932953",
	I0421 19:17:24.401375   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.401381   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401385   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401389   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401393   40508 command_runner.go:130] >     },
	I0421 19:17:24.401396   40508 command_runner.go:130] >     {
	I0421 19:17:24.401402   40508 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0421 19:17:24.401405   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401410   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0421 19:17:24.401414   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401418   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401425   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0421 19:17:24.401431   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0421 19:17:24.401434   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401438   40508 command_runner.go:130] >       "size": "63026502",
	I0421 19:17:24.401441   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401445   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.401448   40508 command_runner.go:130] >       },
	I0421 19:17:24.401451   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401455   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401459   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.401462   40508 command_runner.go:130] >     },
	I0421 19:17:24.401465   40508 command_runner.go:130] >     {
	I0421 19:17:24.401470   40508 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0421 19:17:24.401474   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.401478   40508 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0421 19:17:24.401481   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401486   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.401493   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0421 19:17:24.401499   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0421 19:17:24.401502   40508 command_runner.go:130] >       ],
	I0421 19:17:24.401506   40508 command_runner.go:130] >       "size": "750414",
	I0421 19:17:24.401510   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.401514   40508 command_runner.go:130] >         "value": "65535"
	I0421 19:17:24.401518   40508 command_runner.go:130] >       },
	I0421 19:17:24.401522   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.401526   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.401532   40508 command_runner.go:130] >       "pinned": true
	I0421 19:17:24.401535   40508 command_runner.go:130] >     }
	I0421 19:17:24.401538   40508 command_runner.go:130] >   ]
	I0421 19:17:24.401541   40508 command_runner.go:130] > }
	I0421 19:17:24.401698   40508 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 19:17:24.401708   40508 crio.go:433] Images already preloaded, skipping extraction
	I0421 19:17:24.401750   40508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:17:24.438514   40508 command_runner.go:130] > {
	I0421 19:17:24.438537   40508 command_runner.go:130] >   "images": [
	I0421 19:17:24.438542   40508 command_runner.go:130] >     {
	I0421 19:17:24.438549   40508 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0421 19:17:24.438554   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438560   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0421 19:17:24.438565   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438570   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438580   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0421 19:17:24.438587   40508 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0421 19:17:24.438592   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438597   40508 command_runner.go:130] >       "size": "65291810",
	I0421 19:17:24.438601   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438606   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.438621   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438628   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438632   40508 command_runner.go:130] >     },
	I0421 19:17:24.438635   40508 command_runner.go:130] >     {
	I0421 19:17:24.438641   40508 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0421 19:17:24.438645   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438653   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0421 19:17:24.438657   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438666   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438678   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0421 19:17:24.438693   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0421 19:17:24.438701   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438707   40508 command_runner.go:130] >       "size": "1363676",
	I0421 19:17:24.438715   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438725   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.438734   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438744   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438751   40508 command_runner.go:130] >     },
	I0421 19:17:24.438756   40508 command_runner.go:130] >     {
	I0421 19:17:24.438769   40508 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0421 19:17:24.438778   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438788   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0421 19:17:24.438797   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438803   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438818   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0421 19:17:24.438829   40508 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0421 19:17:24.438835   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438841   40508 command_runner.go:130] >       "size": "31470524",
	I0421 19:17:24.438848   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438852   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.438859   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438862   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438868   40508 command_runner.go:130] >     },
	I0421 19:17:24.438872   40508 command_runner.go:130] >     {
	I0421 19:17:24.438880   40508 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0421 19:17:24.438886   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438891   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0421 19:17:24.438897   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438901   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.438910   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0421 19:17:24.438922   40508 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0421 19:17:24.438928   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438932   40508 command_runner.go:130] >       "size": "61245718",
	I0421 19:17:24.438938   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.438942   40508 command_runner.go:130] >       "username": "nonroot",
	I0421 19:17:24.438953   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.438959   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.438962   40508 command_runner.go:130] >     },
	I0421 19:17:24.438966   40508 command_runner.go:130] >     {
	I0421 19:17:24.438975   40508 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0421 19:17:24.438981   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.438986   40508 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0421 19:17:24.438992   40508 command_runner.go:130] >       ],
	I0421 19:17:24.438996   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439005   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0421 19:17:24.439014   40508 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0421 19:17:24.439019   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439024   40508 command_runner.go:130] >       "size": "150779692",
	I0421 19:17:24.439029   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439033   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439039   40508 command_runner.go:130] >       },
	I0421 19:17:24.439043   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439050   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439054   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439061   40508 command_runner.go:130] >     },
	I0421 19:17:24.439065   40508 command_runner.go:130] >     {
	I0421 19:17:24.439073   40508 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0421 19:17:24.439080   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439085   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0421 19:17:24.439091   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439095   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439104   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0421 19:17:24.439114   40508 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0421 19:17:24.439119   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439124   40508 command_runner.go:130] >       "size": "117609952",
	I0421 19:17:24.439128   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439135   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439138   40508 command_runner.go:130] >       },
	I0421 19:17:24.439145   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439149   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439155   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439158   40508 command_runner.go:130] >     },
	I0421 19:17:24.439164   40508 command_runner.go:130] >     {
	I0421 19:17:24.439170   40508 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0421 19:17:24.439184   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439192   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0421 19:17:24.439198   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439201   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439211   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0421 19:17:24.439220   40508 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0421 19:17:24.439229   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439234   40508 command_runner.go:130] >       "size": "112170310",
	I0421 19:17:24.439240   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439244   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439250   40508 command_runner.go:130] >       },
	I0421 19:17:24.439254   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439261   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439264   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439272   40508 command_runner.go:130] >     },
	I0421 19:17:24.439275   40508 command_runner.go:130] >     {
	I0421 19:17:24.439282   40508 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0421 19:17:24.439288   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439293   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0421 19:17:24.439298   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439302   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439318   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0421 19:17:24.439327   40508 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0421 19:17:24.439333   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439338   40508 command_runner.go:130] >       "size": "85932953",
	I0421 19:17:24.439344   40508 command_runner.go:130] >       "uid": null,
	I0421 19:17:24.439348   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439354   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439358   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439363   40508 command_runner.go:130] >     },
	I0421 19:17:24.439367   40508 command_runner.go:130] >     {
	I0421 19:17:24.439375   40508 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0421 19:17:24.439379   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439390   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0421 19:17:24.439398   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439405   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439420   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0421 19:17:24.439434   40508 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0421 19:17:24.439443   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439453   40508 command_runner.go:130] >       "size": "63026502",
	I0421 19:17:24.439462   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439468   40508 command_runner.go:130] >         "value": "0"
	I0421 19:17:24.439477   40508 command_runner.go:130] >       },
	I0421 19:17:24.439487   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439496   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439505   40508 command_runner.go:130] >       "pinned": false
	I0421 19:17:24.439513   40508 command_runner.go:130] >     },
	I0421 19:17:24.439517   40508 command_runner.go:130] >     {
	I0421 19:17:24.439524   40508 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0421 19:17:24.439530   40508 command_runner.go:130] >       "repoTags": [
	I0421 19:17:24.439534   40508 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0421 19:17:24.439540   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439546   40508 command_runner.go:130] >       "repoDigests": [
	I0421 19:17:24.439555   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0421 19:17:24.439567   40508 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0421 19:17:24.439574   40508 command_runner.go:130] >       ],
	I0421 19:17:24.439578   40508 command_runner.go:130] >       "size": "750414",
	I0421 19:17:24.439584   40508 command_runner.go:130] >       "uid": {
	I0421 19:17:24.439589   40508 command_runner.go:130] >         "value": "65535"
	I0421 19:17:24.439594   40508 command_runner.go:130] >       },
	I0421 19:17:24.439598   40508 command_runner.go:130] >       "username": "",
	I0421 19:17:24.439605   40508 command_runner.go:130] >       "spec": null,
	I0421 19:17:24.439609   40508 command_runner.go:130] >       "pinned": true
	I0421 19:17:24.439615   40508 command_runner.go:130] >     }
	I0421 19:17:24.439618   40508 command_runner.go:130] >   ]
	I0421 19:17:24.439624   40508 command_runner.go:130] > }
	I0421 19:17:24.439739   40508 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 19:17:24.439750   40508 cache_images.go:84] Images are preloaded, skipping loading
	I0421 19:17:24.439757   40508 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.0 crio true true} ...
	I0421 19:17:24.439853   40508 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-860427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-860427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:17:24.439912   40508 ssh_runner.go:195] Run: crio config
	I0421 19:17:24.487864   40508 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0421 19:17:24.487889   40508 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0421 19:17:24.487896   40508 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0421 19:17:24.487900   40508 command_runner.go:130] > #
	I0421 19:17:24.487907   40508 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0421 19:17:24.487914   40508 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0421 19:17:24.487920   40508 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0421 19:17:24.487929   40508 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0421 19:17:24.487933   40508 command_runner.go:130] > # reload'.
	I0421 19:17:24.487939   40508 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0421 19:17:24.487948   40508 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0421 19:17:24.487955   40508 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0421 19:17:24.487963   40508 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0421 19:17:24.487972   40508 command_runner.go:130] > [crio]
	I0421 19:17:24.487982   40508 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0421 19:17:24.487993   40508 command_runner.go:130] > # container images, in this directory.
	I0421 19:17:24.488028   40508 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0421 19:17:24.488065   40508 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0421 19:17:24.488212   40508 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0421 19:17:24.488230   40508 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores newly pulled images in this directory rather than in Root.
	I0421 19:17:24.488639   40508 command_runner.go:130] > # imagestore = ""
	I0421 19:17:24.488653   40508 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0421 19:17:24.488660   40508 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0421 19:17:24.488810   40508 command_runner.go:130] > storage_driver = "overlay"
	I0421 19:17:24.488825   40508 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0421 19:17:24.488831   40508 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0421 19:17:24.488836   40508 command_runner.go:130] > storage_option = [
	I0421 19:17:24.489032   40508 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0421 19:17:24.489081   40508 command_runner.go:130] > ]
	I0421 19:17:24.489098   40508 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0421 19:17:24.489109   40508 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0421 19:17:24.489241   40508 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0421 19:17:24.489256   40508 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0421 19:17:24.489265   40508 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0421 19:17:24.489273   40508 command_runner.go:130] > # always happen on a node reboot
	I0421 19:17:24.489749   40508 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0421 19:17:24.489770   40508 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0421 19:17:24.489780   40508 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0421 19:17:24.489791   40508 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0421 19:17:24.489921   40508 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0421 19:17:24.489940   40508 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0421 19:17:24.489958   40508 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0421 19:17:24.490332   40508 command_runner.go:130] > # internal_wipe = true
	I0421 19:17:24.490351   40508 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0421 19:17:24.490360   40508 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0421 19:17:24.490780   40508 command_runner.go:130] > # internal_repair = false
	I0421 19:17:24.490800   40508 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0421 19:17:24.490811   40508 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0421 19:17:24.490824   40508 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0421 19:17:24.491152   40508 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0421 19:17:24.491170   40508 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0421 19:17:24.491178   40508 command_runner.go:130] > [crio.api]
	I0421 19:17:24.491190   40508 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0421 19:17:24.491640   40508 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0421 19:17:24.491663   40508 command_runner.go:130] > # IP address on which the stream server will listen.
	I0421 19:17:24.491931   40508 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0421 19:17:24.491950   40508 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0421 19:17:24.491959   40508 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0421 19:17:24.492339   40508 command_runner.go:130] > # stream_port = "0"
	I0421 19:17:24.492358   40508 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0421 19:17:24.492746   40508 command_runner.go:130] > # stream_enable_tls = false
	I0421 19:17:24.492763   40508 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0421 19:17:24.492954   40508 command_runner.go:130] > # stream_idle_timeout = ""
	I0421 19:17:24.492972   40508 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0421 19:17:24.492983   40508 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0421 19:17:24.492990   40508 command_runner.go:130] > # minutes.
	I0421 19:17:24.493284   40508 command_runner.go:130] > # stream_tls_cert = ""
	I0421 19:17:24.493305   40508 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0421 19:17:24.493314   40508 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0421 19:17:24.493712   40508 command_runner.go:130] > # stream_tls_key = ""
	I0421 19:17:24.493726   40508 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0421 19:17:24.493737   40508 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0421 19:17:24.493753   40508 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0421 19:17:24.493765   40508 command_runner.go:130] > # stream_tls_ca = ""
	I0421 19:17:24.493791   40508 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0421 19:17:24.493803   40508 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0421 19:17:24.493815   40508 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0421 19:17:24.493827   40508 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0421 19:17:24.493839   40508 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0421 19:17:24.493852   40508 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0421 19:17:24.493861   40508 command_runner.go:130] > [crio.runtime]
	I0421 19:17:24.493872   40508 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0421 19:17:24.493884   40508 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0421 19:17:24.493906   40508 command_runner.go:130] > # "nofile=1024:2048"
	I0421 19:17:24.493919   40508 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0421 19:17:24.493926   40508 command_runner.go:130] > # default_ulimits = [
	I0421 19:17:24.493933   40508 command_runner.go:130] > # ]
	I0421 19:17:24.493953   40508 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0421 19:17:24.493963   40508 command_runner.go:130] > # no_pivot = false
	I0421 19:17:24.493977   40508 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0421 19:17:24.493991   40508 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0421 19:17:24.494004   40508 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0421 19:17:24.494018   40508 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0421 19:17:24.494033   40508 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0421 19:17:24.494049   40508 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0421 19:17:24.494069   40508 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0421 19:17:24.494078   40508 command_runner.go:130] > # Cgroup setting for conmon
	I0421 19:17:24.494093   40508 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0421 19:17:24.494104   40508 command_runner.go:130] > conmon_cgroup = "pod"
	I0421 19:17:24.494118   40508 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0421 19:17:24.494130   40508 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0421 19:17:24.494143   40508 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0421 19:17:24.494152   40508 command_runner.go:130] > conmon_env = [
	I0421 19:17:24.494165   40508 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0421 19:17:24.494173   40508 command_runner.go:130] > ]
	I0421 19:17:24.494183   40508 command_runner.go:130] > # Additional environment variables to set for all the
	I0421 19:17:24.494192   40508 command_runner.go:130] > # containers. These are overridden if set in the
	I0421 19:17:24.494205   40508 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0421 19:17:24.494215   40508 command_runner.go:130] > # default_env = [
	I0421 19:17:24.494222   40508 command_runner.go:130] > # ]
	I0421 19:17:24.494237   40508 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0421 19:17:24.494253   40508 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0421 19:17:24.494262   40508 command_runner.go:130] > # selinux = false
	I0421 19:17:24.494276   40508 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0421 19:17:24.494289   40508 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0421 19:17:24.494301   40508 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0421 19:17:24.494309   40508 command_runner.go:130] > # seccomp_profile = ""
	I0421 19:17:24.494324   40508 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0421 19:17:24.494336   40508 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0421 19:17:24.494350   40508 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0421 19:17:24.494361   40508 command_runner.go:130] > # which might increase security.
	I0421 19:17:24.494373   40508 command_runner.go:130] > # This option is currently deprecated,
	I0421 19:17:24.494386   40508 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0421 19:17:24.494398   40508 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0421 19:17:24.494412   40508 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0421 19:17:24.494426   40508 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0421 19:17:24.494440   40508 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0421 19:17:24.494453   40508 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0421 19:17:24.494465   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.494484   40508 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0421 19:17:24.494498   40508 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0421 19:17:24.494505   40508 command_runner.go:130] > # the cgroup blockio controller.
	I0421 19:17:24.494517   40508 command_runner.go:130] > # blockio_config_file = ""
	I0421 19:17:24.494531   40508 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0421 19:17:24.494541   40508 command_runner.go:130] > # blockio parameters.
	I0421 19:17:24.494549   40508 command_runner.go:130] > # blockio_reload = false
	I0421 19:17:24.494563   40508 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0421 19:17:24.494573   40508 command_runner.go:130] > # irqbalance daemon.
	I0421 19:17:24.494585   40508 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0421 19:17:24.494596   40508 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0421 19:17:24.494611   40508 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0421 19:17:24.494625   40508 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0421 19:17:24.494639   40508 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0421 19:17:24.494652   40508 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0421 19:17:24.494665   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.494674   40508 command_runner.go:130] > # rdt_config_file = ""
	I0421 19:17:24.494691   40508 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0421 19:17:24.494702   40508 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0421 19:17:24.494730   40508 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0421 19:17:24.494749   40508 command_runner.go:130] > # separate_pull_cgroup = ""
	I0421 19:17:24.494760   40508 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0421 19:17:24.494775   40508 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0421 19:17:24.494784   40508 command_runner.go:130] > # will be added.
	I0421 19:17:24.494792   40508 command_runner.go:130] > # default_capabilities = [
	I0421 19:17:24.494801   40508 command_runner.go:130] > # 	"CHOWN",
	I0421 19:17:24.494810   40508 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0421 19:17:24.494819   40508 command_runner.go:130] > # 	"FSETID",
	I0421 19:17:24.494826   40508 command_runner.go:130] > # 	"FOWNER",
	I0421 19:17:24.494840   40508 command_runner.go:130] > # 	"SETGID",
	I0421 19:17:24.494853   40508 command_runner.go:130] > # 	"SETUID",
	I0421 19:17:24.494863   40508 command_runner.go:130] > # 	"SETPCAP",
	I0421 19:17:24.494872   40508 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0421 19:17:24.494881   40508 command_runner.go:130] > # 	"KILL",
	I0421 19:17:24.494888   40508 command_runner.go:130] > # ]
	I0421 19:17:24.494903   40508 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0421 19:17:24.494917   40508 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0421 19:17:24.494928   40508 command_runner.go:130] > # add_inheritable_capabilities = false
	I0421 19:17:24.494939   40508 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0421 19:17:24.494951   40508 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0421 19:17:24.494959   40508 command_runner.go:130] > default_sysctls = [
	I0421 19:17:24.494976   40508 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0421 19:17:24.494985   40508 command_runner.go:130] > ]
	I0421 19:17:24.494994   40508 command_runner.go:130] > # List of devices on the host that a
	I0421 19:17:24.495007   40508 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0421 19:17:24.495017   40508 command_runner.go:130] > # allowed_devices = [
	I0421 19:17:24.495026   40508 command_runner.go:130] > # 	"/dev/fuse",
	I0421 19:17:24.495033   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495043   40508 command_runner.go:130] > # List of additional devices, specified as
	I0421 19:17:24.495056   40508 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0421 19:17:24.495068   40508 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0421 19:17:24.495081   40508 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0421 19:17:24.495092   40508 command_runner.go:130] > # additional_devices = [
	I0421 19:17:24.495100   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495110   40508 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0421 19:17:24.495120   40508 command_runner.go:130] > # cdi_spec_dirs = [
	I0421 19:17:24.495131   40508 command_runner.go:130] > # 	"/etc/cdi",
	I0421 19:17:24.495139   40508 command_runner.go:130] > # 	"/var/run/cdi",
	I0421 19:17:24.495148   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495158   40508 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0421 19:17:24.495172   40508 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0421 19:17:24.495182   40508 command_runner.go:130] > # Defaults to false.
	I0421 19:17:24.495191   40508 command_runner.go:130] > # device_ownership_from_security_context = false
	I0421 19:17:24.495205   40508 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0421 19:17:24.495218   40508 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0421 19:17:24.495227   40508 command_runner.go:130] > # hooks_dir = [
	I0421 19:17:24.495235   40508 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0421 19:17:24.495243   40508 command_runner.go:130] > # ]
	I0421 19:17:24.495254   40508 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0421 19:17:24.495268   40508 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0421 19:17:24.495279   40508 command_runner.go:130] > # its default mounts from the following two files:
	I0421 19:17:24.495287   40508 command_runner.go:130] > #
	I0421 19:17:24.495298   40508 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0421 19:17:24.495311   40508 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0421 19:17:24.495323   40508 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0421 19:17:24.495331   40508 command_runner.go:130] > #
	I0421 19:17:24.495342   40508 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0421 19:17:24.495355   40508 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0421 19:17:24.495369   40508 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0421 19:17:24.495381   40508 command_runner.go:130] > #      only add mounts it finds in this file.
	I0421 19:17:24.495390   40508 command_runner.go:130] > #
	I0421 19:17:24.495397   40508 command_runner.go:130] > # default_mounts_file = ""
	I0421 19:17:24.495408   40508 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0421 19:17:24.495423   40508 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0421 19:17:24.495433   40508 command_runner.go:130] > pids_limit = 1024
	I0421 19:17:24.495444   40508 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0421 19:17:24.495458   40508 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0421 19:17:24.495472   40508 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0421 19:17:24.495489   40508 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0421 19:17:24.495498   40508 command_runner.go:130] > # log_size_max = -1
	I0421 19:17:24.495510   40508 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0421 19:17:24.495519   40508 command_runner.go:130] > # log_to_journald = false
	I0421 19:17:24.495531   40508 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0421 19:17:24.495543   40508 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0421 19:17:24.495555   40508 command_runner.go:130] > # Path to directory for container attach sockets.
	I0421 19:17:24.495567   40508 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0421 19:17:24.495578   40508 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0421 19:17:24.495585   40508 command_runner.go:130] > # bind_mount_prefix = ""
	I0421 19:17:24.495598   40508 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0421 19:17:24.495605   40508 command_runner.go:130] > # read_only = false
	I0421 19:17:24.495618   40508 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0421 19:17:24.495632   40508 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0421 19:17:24.495642   40508 command_runner.go:130] > # live configuration reload.
	I0421 19:17:24.495649   40508 command_runner.go:130] > # log_level = "info"
	I0421 19:17:24.495662   40508 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0421 19:17:24.495674   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.495686   40508 command_runner.go:130] > # log_filter = ""
	I0421 19:17:24.495697   40508 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0421 19:17:24.495710   40508 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0421 19:17:24.495721   40508 command_runner.go:130] > # separated by comma.
	I0421 19:17:24.495738   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495748   40508 command_runner.go:130] > # uid_mappings = ""
	I0421 19:17:24.495759   40508 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0421 19:17:24.495771   40508 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0421 19:17:24.495781   40508 command_runner.go:130] > # separated by comma.
	I0421 19:17:24.495795   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495804   40508 command_runner.go:130] > # gid_mappings = ""
	I0421 19:17:24.495815   40508 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0421 19:17:24.495829   40508 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0421 19:17:24.495846   40508 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0421 19:17:24.495862   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495872   40508 command_runner.go:130] > # minimum_mappable_uid = -1
	I0421 19:17:24.495882   40508 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0421 19:17:24.495896   40508 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0421 19:17:24.495910   40508 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0421 19:17:24.495926   40508 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0421 19:17:24.495936   40508 command_runner.go:130] > # minimum_mappable_gid = -1
	I0421 19:17:24.495950   40508 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0421 19:17:24.495964   40508 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0421 19:17:24.495977   40508 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0421 19:17:24.495987   40508 command_runner.go:130] > # ctr_stop_timeout = 30
	I0421 19:17:24.495998   40508 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0421 19:17:24.496013   40508 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0421 19:17:24.496025   40508 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0421 19:17:24.496034   40508 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0421 19:17:24.496043   40508 command_runner.go:130] > drop_infra_ctr = false
	I0421 19:17:24.496053   40508 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0421 19:17:24.496065   40508 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0421 19:17:24.496080   40508 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0421 19:17:24.496090   40508 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0421 19:17:24.496104   40508 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0421 19:17:24.496117   40508 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0421 19:17:24.496129   40508 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0421 19:17:24.496138   40508 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0421 19:17:24.496148   40508 command_runner.go:130] > # shared_cpuset = ""
	I0421 19:17:24.496159   40508 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0421 19:17:24.496170   40508 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0421 19:17:24.496181   40508 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0421 19:17:24.496194   40508 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0421 19:17:24.496204   40508 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0421 19:17:24.496217   40508 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0421 19:17:24.496227   40508 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0421 19:17:24.496238   40508 command_runner.go:130] > # enable_criu_support = false
	I0421 19:17:24.496250   40508 command_runner.go:130] > # Enable/disable the generation of the container,
	I0421 19:17:24.496261   40508 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0421 19:17:24.496275   40508 command_runner.go:130] > # enable_pod_events = false
	I0421 19:17:24.496288   40508 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0421 19:17:24.496310   40508 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0421 19:17:24.496320   40508 command_runner.go:130] > # default_runtime = "runc"
	I0421 19:17:24.496330   40508 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0421 19:17:24.496346   40508 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0421 19:17:24.496365   40508 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0421 19:17:24.496376   40508 command_runner.go:130] > # creation as a file is not desired either.
	I0421 19:17:24.496395   40508 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0421 19:17:24.496406   40508 command_runner.go:130] > # the hostname is being managed dynamically.
	I0421 19:17:24.496415   40508 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0421 19:17:24.496423   40508 command_runner.go:130] > # ]
	I0421 19:17:24.496434   40508 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0421 19:17:24.496448   40508 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0421 19:17:24.496462   40508 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0421 19:17:24.496473   40508 command_runner.go:130] > # Each entry in the table should follow the format:
	I0421 19:17:24.496478   40508 command_runner.go:130] > #
	I0421 19:17:24.496489   40508 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0421 19:17:24.496501   40508 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0421 19:17:24.496527   40508 command_runner.go:130] > # runtime_type = "oci"
	I0421 19:17:24.496537   40508 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0421 19:17:24.496549   40508 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0421 19:17:24.496561   40508 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0421 19:17:24.496570   40508 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0421 19:17:24.496579   40508 command_runner.go:130] > # monitor_env = []
	I0421 19:17:24.496588   40508 command_runner.go:130] > # privileged_without_host_devices = false
	I0421 19:17:24.496598   40508 command_runner.go:130] > # allowed_annotations = []
	I0421 19:17:24.496611   40508 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0421 19:17:24.496620   40508 command_runner.go:130] > # Where:
	I0421 19:17:24.496629   40508 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0421 19:17:24.496643   40508 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0421 19:17:24.496657   40508 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0421 19:17:24.496668   40508 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0421 19:17:24.496682   40508 command_runner.go:130] > #   in $PATH.
	I0421 19:17:24.496695   40508 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0421 19:17:24.496706   40508 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0421 19:17:24.496721   40508 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0421 19:17:24.496730   40508 command_runner.go:130] > #   state.
	I0421 19:17:24.496741   40508 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0421 19:17:24.496754   40508 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0421 19:17:24.496767   40508 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0421 19:17:24.496779   40508 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0421 19:17:24.496793   40508 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0421 19:17:24.496807   40508 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0421 19:17:24.496819   40508 command_runner.go:130] > #   The currently recognized values are:
	I0421 19:17:24.496831   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0421 19:17:24.496846   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0421 19:17:24.496860   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0421 19:17:24.496873   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0421 19:17:24.496889   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0421 19:17:24.496903   40508 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0421 19:17:24.496918   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0421 19:17:24.496932   40508 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0421 19:17:24.496946   40508 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0421 19:17:24.496960   40508 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0421 19:17:24.496970   40508 command_runner.go:130] > #   deprecated option "conmon".
	I0421 19:17:24.496984   40508 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0421 19:17:24.496993   40508 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0421 19:17:24.497007   40508 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0421 19:17:24.497018   40508 command_runner.go:130] > #   should be moved to the container's cgroup
	I0421 19:17:24.497033   40508 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0421 19:17:24.497045   40508 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0421 19:17:24.497059   40508 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0421 19:17:24.497071   40508 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0421 19:17:24.497078   40508 command_runner.go:130] > #
	I0421 19:17:24.497087   40508 command_runner.go:130] > # Using the seccomp notifier feature:
	I0421 19:17:24.497095   40508 command_runner.go:130] > #
	I0421 19:17:24.497106   40508 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0421 19:17:24.497121   40508 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0421 19:17:24.497128   40508 command_runner.go:130] > #
	I0421 19:17:24.497141   40508 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0421 19:17:24.497155   40508 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0421 19:17:24.497163   40508 command_runner.go:130] > #
	I0421 19:17:24.497174   40508 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0421 19:17:24.497183   40508 command_runner.go:130] > # feature.
	I0421 19:17:24.497189   40508 command_runner.go:130] > #
	I0421 19:17:24.497202   40508 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0421 19:17:24.497215   40508 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0421 19:17:24.497229   40508 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0421 19:17:24.497242   40508 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0421 19:17:24.497257   40508 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0421 19:17:24.497264   40508 command_runner.go:130] > #
	I0421 19:17:24.497274   40508 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0421 19:17:24.497288   40508 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0421 19:17:24.497298   40508 command_runner.go:130] > #
	I0421 19:17:24.497309   40508 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0421 19:17:24.497322   40508 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0421 19:17:24.497330   40508 command_runner.go:130] > #
	I0421 19:17:24.497340   40508 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0421 19:17:24.497353   40508 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0421 19:17:24.497363   40508 command_runner.go:130] > # limitation.
	I0421 19:17:24.497371   40508 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0421 19:17:24.497381   40508 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0421 19:17:24.497390   40508 command_runner.go:130] > runtime_type = "oci"
	I0421 19:17:24.497400   40508 command_runner.go:130] > runtime_root = "/run/runc"
	I0421 19:17:24.497410   40508 command_runner.go:130] > runtime_config_path = ""
	I0421 19:17:24.497419   40508 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0421 19:17:24.497427   40508 command_runner.go:130] > monitor_cgroup = "pod"
	I0421 19:17:24.497437   40508 command_runner.go:130] > monitor_exec_cgroup = ""
	I0421 19:17:24.497447   40508 command_runner.go:130] > monitor_env = [
	I0421 19:17:24.497458   40508 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0421 19:17:24.497466   40508 command_runner.go:130] > ]
	I0421 19:17:24.497475   40508 command_runner.go:130] > privileged_without_host_devices = false
	I0421 19:17:24.497488   40508 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0421 19:17:24.497500   40508 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0421 19:17:24.497513   40508 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0421 19:17:24.497527   40508 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0421 19:17:24.497543   40508 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0421 19:17:24.497556   40508 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0421 19:17:24.497578   40508 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0421 19:17:24.497594   40508 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0421 19:17:24.497606   40508 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0421 19:17:24.497619   40508 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0421 19:17:24.497627   40508 command_runner.go:130] > # Example:
	I0421 19:17:24.497636   40508 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0421 19:17:24.497648   40508 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0421 19:17:24.497661   40508 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0421 19:17:24.497673   40508 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0421 19:17:24.497686   40508 command_runner.go:130] > # cpuset = 0
	I0421 19:17:24.497695   40508 command_runner.go:130] > # cpushares = "0-1"
	I0421 19:17:24.497702   40508 command_runner.go:130] > # Where:
	I0421 19:17:24.497710   40508 command_runner.go:130] > # The workload name is workload-type.
	I0421 19:17:24.497725   40508 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0421 19:17:24.497738   40508 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0421 19:17:24.497750   40508 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0421 19:17:24.497767   40508 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0421 19:17:24.497780   40508 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0421 19:17:24.497793   40508 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0421 19:17:24.497808   40508 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0421 19:17:24.497819   40508 command_runner.go:130] > # Default value is set to true
	I0421 19:17:24.497827   40508 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0421 19:17:24.497840   40508 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0421 19:17:24.497851   40508 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0421 19:17:24.497861   40508 command_runner.go:130] > # Default value is set to 'false'
	I0421 19:17:24.497869   40508 command_runner.go:130] > # disable_hostport_mapping = false
	I0421 19:17:24.497883   40508 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0421 19:17:24.497892   40508 command_runner.go:130] > #
	I0421 19:17:24.497903   40508 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0421 19:17:24.497917   40508 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0421 19:17:24.497931   40508 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0421 19:17:24.497942   40508 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0421 19:17:24.497950   40508 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0421 19:17:24.497954   40508 command_runner.go:130] > [crio.image]
	I0421 19:17:24.497962   40508 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0421 19:17:24.497968   40508 command_runner.go:130] > # default_transport = "docker://"
	I0421 19:17:24.497980   40508 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0421 19:17:24.497990   40508 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0421 19:17:24.497996   40508 command_runner.go:130] > # global_auth_file = ""
	I0421 19:17:24.498003   40508 command_runner.go:130] > # The image used to instantiate infra containers.
	I0421 19:17:24.498011   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.498019   40508 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0421 19:17:24.498029   40508 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0421 19:17:24.498040   40508 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0421 19:17:24.498048   40508 command_runner.go:130] > # This option supports live configuration reload.
	I0421 19:17:24.498068   40508 command_runner.go:130] > # pause_image_auth_file = ""
	I0421 19:17:24.498078   40508 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0421 19:17:24.498088   40508 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0421 19:17:24.498099   40508 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0421 19:17:24.498109   40508 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0421 19:17:24.498116   40508 command_runner.go:130] > # pause_command = "/pause"
	I0421 19:17:24.498126   40508 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0421 19:17:24.498136   40508 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0421 19:17:24.498146   40508 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0421 19:17:24.498155   40508 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0421 19:17:24.498165   40508 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0421 19:17:24.498178   40508 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0421 19:17:24.498184   40508 command_runner.go:130] > # pinned_images = [
	I0421 19:17:24.498190   40508 command_runner.go:130] > # ]
	I0421 19:17:24.498202   40508 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0421 19:17:24.498216   40508 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0421 19:17:24.498230   40508 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0421 19:17:24.498244   40508 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0421 19:17:24.498258   40508 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0421 19:17:24.498267   40508 command_runner.go:130] > # signature_policy = ""
	I0421 19:17:24.498279   40508 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0421 19:17:24.498293   40508 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0421 19:17:24.498307   40508 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0421 19:17:24.498321   40508 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0421 19:17:24.498333   40508 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0421 19:17:24.498344   40508 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0421 19:17:24.498358   40508 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0421 19:17:24.498375   40508 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0421 19:17:24.498385   40508 command_runner.go:130] > # changing them here.
	I0421 19:17:24.498394   40508 command_runner.go:130] > # insecure_registries = [
	I0421 19:17:24.498402   40508 command_runner.go:130] > # ]
	I0421 19:17:24.498413   40508 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0421 19:17:24.498425   40508 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0421 19:17:24.498435   40508 command_runner.go:130] > # image_volumes = "mkdir"
	I0421 19:17:24.498447   40508 command_runner.go:130] > # Temporary directory to use for storing big files
	I0421 19:17:24.498458   40508 command_runner.go:130] > # big_files_temporary_dir = ""
	I0421 19:17:24.498472   40508 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0421 19:17:24.498481   40508 command_runner.go:130] > # CNI plugins.
	I0421 19:17:24.498490   40508 command_runner.go:130] > [crio.network]
	I0421 19:17:24.498500   40508 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0421 19:17:24.498513   40508 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0421 19:17:24.498522   40508 command_runner.go:130] > # cni_default_network = ""
	I0421 19:17:24.498532   40508 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0421 19:17:24.498542   40508 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0421 19:17:24.498553   40508 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0421 19:17:24.498563   40508 command_runner.go:130] > # plugin_dirs = [
	I0421 19:17:24.498572   40508 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0421 19:17:24.498581   40508 command_runner.go:130] > # ]
	I0421 19:17:24.498591   40508 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0421 19:17:24.498601   40508 command_runner.go:130] > [crio.metrics]
	I0421 19:17:24.498611   40508 command_runner.go:130] > # Globally enable or disable metrics support.
	I0421 19:17:24.498621   40508 command_runner.go:130] > enable_metrics = true
	I0421 19:17:24.498631   40508 command_runner.go:130] > # Specify enabled metrics collectors.
	I0421 19:17:24.498640   40508 command_runner.go:130] > # Per default all metrics are enabled.
	I0421 19:17:24.498654   40508 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0421 19:17:24.498667   40508 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0421 19:17:24.498685   40508 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0421 19:17:24.498695   40508 command_runner.go:130] > # metrics_collectors = [
	I0421 19:17:24.498704   40508 command_runner.go:130] > # 	"operations",
	I0421 19:17:24.498715   40508 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0421 19:17:24.498727   40508 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0421 19:17:24.498738   40508 command_runner.go:130] > # 	"operations_errors",
	I0421 19:17:24.498746   40508 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0421 19:17:24.498757   40508 command_runner.go:130] > # 	"image_pulls_by_name",
	I0421 19:17:24.498768   40508 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0421 19:17:24.498778   40508 command_runner.go:130] > # 	"image_pulls_failures",
	I0421 19:17:24.498785   40508 command_runner.go:130] > # 	"image_pulls_successes",
	I0421 19:17:24.498792   40508 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0421 19:17:24.498800   40508 command_runner.go:130] > # 	"image_layer_reuse",
	I0421 19:17:24.498811   40508 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0421 19:17:24.498827   40508 command_runner.go:130] > # 	"containers_oom_total",
	I0421 19:17:24.498837   40508 command_runner.go:130] > # 	"containers_oom",
	I0421 19:17:24.498845   40508 command_runner.go:130] > # 	"processes_defunct",
	I0421 19:17:24.498854   40508 command_runner.go:130] > # 	"operations_total",
	I0421 19:17:24.498865   40508 command_runner.go:130] > # 	"operations_latency_seconds",
	I0421 19:17:24.498877   40508 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0421 19:17:24.498888   40508 command_runner.go:130] > # 	"operations_errors_total",
	I0421 19:17:24.498898   40508 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0421 19:17:24.498907   40508 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0421 19:17:24.498917   40508 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0421 19:17:24.498927   40508 command_runner.go:130] > # 	"image_pulls_success_total",
	I0421 19:17:24.498935   40508 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0421 19:17:24.498945   40508 command_runner.go:130] > # 	"containers_oom_count_total",
	I0421 19:17:24.498953   40508 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0421 19:17:24.498964   40508 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0421 19:17:24.498972   40508 command_runner.go:130] > # ]
	I0421 19:17:24.498982   40508 command_runner.go:130] > # The port on which the metrics server will listen.
	I0421 19:17:24.498992   40508 command_runner.go:130] > # metrics_port = 9090
	I0421 19:17:24.499009   40508 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0421 19:17:24.499018   40508 command_runner.go:130] > # metrics_socket = ""
	I0421 19:17:24.499027   40508 command_runner.go:130] > # The certificate for the secure metrics server.
	I0421 19:17:24.499039   40508 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0421 19:17:24.499051   40508 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0421 19:17:24.499062   40508 command_runner.go:130] > # certificate on any modification event.
	I0421 19:17:24.499070   40508 command_runner.go:130] > # metrics_cert = ""
	I0421 19:17:24.499082   40508 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0421 19:17:24.499094   40508 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0421 19:17:24.499104   40508 command_runner.go:130] > # metrics_key = ""
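For orientation only (not part of this test run): a minimal Go sketch that scrapes the CRI-O metrics described above. It assumes metrics are enabled, the default metrics_port of 9090 from the config, and the usual Prometheus text endpoint at /metrics; the prefix filtering mirrors the "crio_" / "container_runtime_" note in the comments.

	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	func main() {
		// Assumption: enable_metrics is on and metrics_port is the default 9090.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			// Collectors may appear with the "crio_" or "container_runtime_" prefix.
			if strings.HasPrefix(line, "crio_operations") ||
				strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
	}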
	I0421 19:17:24.499117   40508 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0421 19:17:24.499126   40508 command_runner.go:130] > [crio.tracing]
	I0421 19:17:24.499135   40508 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0421 19:17:24.499146   40508 command_runner.go:130] > # enable_tracing = false
	I0421 19:17:24.499158   40508 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0421 19:17:24.499169   40508 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0421 19:17:24.499183   40508 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0421 19:17:24.499195   40508 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0421 19:17:24.499205   40508 command_runner.go:130] > # CRI-O NRI configuration.
	I0421 19:17:24.499214   40508 command_runner.go:130] > [crio.nri]
	I0421 19:17:24.499222   40508 command_runner.go:130] > # Globally enable or disable NRI.
	I0421 19:17:24.499232   40508 command_runner.go:130] > # enable_nri = false
	I0421 19:17:24.499240   40508 command_runner.go:130] > # NRI socket to listen on.
	I0421 19:17:24.499251   40508 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0421 19:17:24.499260   40508 command_runner.go:130] > # NRI plugin directory to use.
	I0421 19:17:24.499271   40508 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0421 19:17:24.499280   40508 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0421 19:17:24.499295   40508 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0421 19:17:24.499306   40508 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0421 19:17:24.499314   40508 command_runner.go:130] > # nri_disable_connections = false
	I0421 19:17:24.499326   40508 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0421 19:17:24.499337   40508 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0421 19:17:24.499347   40508 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0421 19:17:24.499357   40508 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0421 19:17:24.499373   40508 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0421 19:17:24.499383   40508 command_runner.go:130] > [crio.stats]
	I0421 19:17:24.499394   40508 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0421 19:17:24.499407   40508 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0421 19:17:24.499418   40508 command_runner.go:130] > # stats_collection_period = 0
	I0421 19:17:24.499450   40508 command_runner.go:130] ! time="2024-04-21 19:17:24.460431256Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0421 19:17:24.499472   40508 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0421 19:17:24.499598   40508 cni.go:84] Creating CNI manager for ""
	I0421 19:17:24.499611   40508 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 19:17:24.499622   40508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:17:24.499650   40508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-860427 NodeName:multinode-860427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 19:17:24.499823   40508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-860427"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
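As a sanity check on the generated KubeletConfiguration above, here is a minimal Go sketch (not part of minikube) that decodes the evictionHard block and prints the thresholds; it assumes the gopkg.in/yaml.v3 package and an inline copy of just those fields.

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3" // assumption: yaml.v3 is available in the module
	)

	type kubeletConfig struct {
		EvictionHard map[string]string `yaml:"evictionHard"`
	}

	func main() {
		// Inline copy of the evictionHard block from the generated config above.
		doc := "evictionHard:\n" +
			"  nodefs.available: \"0%\"\n" +
			"  nodefs.inodesFree: \"0%\"\n" +
			"  imagefs.available: \"0%\"\n"

		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			panic(err)
		}
		fmt.Println(cfg.EvictionHard) // map[imagefs.available:0% nodefs.available:0% nodefs.inodesFree:0%]
	}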
	
	I0421 19:17:24.499895   40508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:17:24.511498   40508 command_runner.go:130] > kubeadm
	I0421 19:17:24.511522   40508 command_runner.go:130] > kubectl
	I0421 19:17:24.511529   40508 command_runner.go:130] > kubelet
	I0421 19:17:24.511547   40508 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:17:24.511589   40508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 19:17:24.523285   40508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0421 19:17:24.544057   40508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:17:24.564929   40508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0421 19:17:24.584813   40508 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0421 19:17:24.589598   40508 command_runner.go:130] > 192.168.39.100	control-plane.minikube.internal
	I0421 19:17:24.589676   40508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:17:24.737602   40508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:17:24.753202   40508 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427 for IP: 192.168.39.100
	I0421 19:17:24.753221   40508 certs.go:194] generating shared ca certs ...
	I0421 19:17:24.753240   40508 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:17:24.753508   40508 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 19:17:24.753582   40508 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 19:17:24.753599   40508 certs.go:256] generating profile certs ...
	I0421 19:17:24.753702   40508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/client.key
	I0421 19:17:24.753806   40508 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.key.9236eb8a
	I0421 19:17:24.753864   40508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.key
	I0421 19:17:24.753881   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:17:24.753908   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:17:24.753930   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:17:24.753949   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:17:24.753967   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:17:24.753989   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:17:24.754010   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:17:24.754028   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:17:24.754119   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 19:17:24.754170   40508 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 19:17:24.754186   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 19:17:24.754224   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 19:17:24.754259   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 19:17:24.754295   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 19:17:24.754364   40508 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:17:24.754408   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem -> /usr/share/ca-certificates/11175.pem
	I0421 19:17:24.754435   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> /usr/share/ca-certificates/111752.pem
	I0421 19:17:24.754457   40508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:24.755029   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:17:24.782879   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:17:24.810882   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:17:24.837927   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:17:24.864205   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 19:17:24.890126   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 19:17:24.915900   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:17:24.942499   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/multinode-860427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 19:17:24.970418   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 19:17:24.996775   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 19:17:25.022862   40508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:17:25.050020   40508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:17:25.067996   40508 ssh_runner.go:195] Run: openssl version
	I0421 19:17:25.074240   40508 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 19:17:25.074367   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 19:17:25.085746   40508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.090720   40508 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.090820   40508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.090858   40508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 19:17:25.097185   40508 command_runner.go:130] > 51391683
	I0421 19:17:25.097333   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 19:17:25.113178   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 19:17:25.141549   40508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.146528   40508 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.146857   40508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.146919   40508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 19:17:25.152858   40508 command_runner.go:130] > 3ec20f2e
	I0421 19:17:25.153089   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:17:25.164160   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:17:25.176359   40508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.181163   40508 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.181217   40508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.181259   40508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:17:25.187222   40508 command_runner.go:130] > b5213941
	I0421 19:17:25.187288   40508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
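The hash-and-symlink steps above follow the standard OpenSSL subject-hash (c_rehash) convention: hash the certificate, then link <hash>.0 in /etc/ssl/certs to it. A minimal Go sketch of the same idea, assuming the openssl binary is on PATH and write access to /etc/ssl/certs; the paths are illustrative.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(certPath string) error {
		// Same command the log runs: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "51391683"

		// Equivalent of: test -L <hash>.0 || ln -fs <cert> /etc/ssl/certs/<hash>.0
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink already present
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}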
	I0421 19:17:25.197922   40508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:17:25.202731   40508 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:17:25.202753   40508 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0421 19:17:25.202762   40508 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0421 19:17:25.202772   40508 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 19:17:25.202783   40508 command_runner.go:130] > Access: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202795   40508 command_runner.go:130] > Modify: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202807   40508 command_runner.go:130] > Change: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202819   40508 command_runner.go:130] >  Birth: 2024-04-21 19:11:01.137438851 +0000
	I0421 19:17:25.202863   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 19:17:25.209001   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.209065   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 19:17:25.215069   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.215124   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 19:17:25.220934   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.220982   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 19:17:25.227069   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.227110   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 19:17:25.232789   40508 command_runner.go:130] > Certificate will not expire
	I0421 19:17:25.232834   40508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 19:17:25.238652   40508 command_runner.go:130] > Certificate will not expire
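The checks above are `openssl x509 -checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". An equivalent check in pure Go (a sketch; the path is one of the certificates listed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same question as -checkend 86400: still valid 24h from now?
		if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}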
	I0421 19:17:25.238961   40508 kubeadm.go:391] StartCluster: {Name:multinode-860427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-860427
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.162 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:
false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:17:25.239070   40508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 19:17:25.239113   40508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:17:25.279018   40508 command_runner.go:130] > 9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273
	I0421 19:17:25.279041   40508 command_runner.go:130] > ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1
	I0421 19:17:25.279056   40508 command_runner.go:130] > 1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df
	I0421 19:17:25.279064   40508 command_runner.go:130] > 8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5
	I0421 19:17:25.279072   40508 command_runner.go:130] > c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4
	I0421 19:17:25.279081   40508 command_runner.go:130] > 9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5
	I0421 19:17:25.279092   40508 command_runner.go:130] > 8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d
	I0421 19:17:25.279110   40508 command_runner.go:130] > cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2
	I0421 19:17:25.280500   40508 cri.go:89] found id: "9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273"
	I0421 19:17:25.280521   40508 cri.go:89] found id: "ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1"
	I0421 19:17:25.280526   40508 cri.go:89] found id: "1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df"
	I0421 19:17:25.280531   40508 cri.go:89] found id: "8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5"
	I0421 19:17:25.280535   40508 cri.go:89] found id: "c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4"
	I0421 19:17:25.280542   40508 cri.go:89] found id: "9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5"
	I0421 19:17:25.280546   40508 cri.go:89] found id: "8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d"
	I0421 19:17:25.280551   40508 cri.go:89] found id: "cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2"
	I0421 19:17:25.280555   40508 cri.go:89] found id: ""
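The listing step above shells out to crictl with a namespace label filter and collects the returned container IDs. A minimal Go sketch of that pattern, assuming crictl is installed and the caller may use sudo, as in the logged command:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command as the log: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}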
	I0421 19:17:25.280601   40508 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.677452136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb6dd19b-2011-41c7-bfdd-25a5e5ecdb39 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.678833134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8b0078c-7502-4629-b084-4f77b9dd4a66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.679397638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727278679374067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8b0078c-7502-4629-b084-4f77b9dd4a66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.679972739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c27b0f7-486a-4ad5-bc34-a8e208cfe25f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.680061505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c27b0f7-486a-4ad5-bc34-a8e208cfe25f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.680460226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c27b0f7-486a-4ad5-bc34-a8e208cfe25f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.724866528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1918dfd-f80a-448e-96f7-374489614055 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.724948656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1918dfd-f80a-448e-96f7-374489614055 name=/runtime.v1.RuntimeService/Version
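The Version/ImageFsInfo/ListContainers entries in this journal are CRI gRPC calls against the CRI-O socket. A sketch of the Version call in Go, assuming the k8s.io/cri-api and google.golang.org/grpc modules and the socket path shown earlier (/var/run/crio/crio.sock):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: gRPC's unix scheme resolves the CRI-O socket path.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		// Matches the logged response: cri-o 1.29.1, API v1.
		fmt.Println(v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
	}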
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.726485380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39eee0b7-7aaa-4781-a62b-605c9827212b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.727378678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727278727353954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39eee0b7-7aaa-4781-a62b-605c9827212b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.727946060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6ed019f-e15d-4975-824d-11a025e83c16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.728000763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6ed019f-e15d-4975-824d-11a025e83c16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.728481255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6ed019f-e15d-4975-824d-11a025e83c16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.774436044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74fe5bca-0282-4ec1-b9a6-0626b84d3420 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.774537921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74fe5bca-0282-4ec1-b9a6-0626b84d3420 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.776500133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=776e8113-b285-4483-8ae8-9ea2f249e62d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.776889503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727278776861504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=776e8113-b285-4483-8ae8-9ea2f249e62d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.777481292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5156ae46-35d6-4518-ad1f-885b67fd201c name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.777541369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5156ae46-35d6-4518-ad1f-885b67fd201c name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.777898017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99b231417b3438f8e21b45626ad6634b376c97c137340965997d66b38413ee9,PodSandboxId:fc3c3ebed26c2e8ca4b8bdf29dabbd2a9e13239c72fcc027db0fc81cb7c46e69,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713726740617910530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0f66c6a810dc27b54a0c5e789d2be16d00cd224d10863e9388e91ff0f67273,PodSandboxId:1a7205876fa91557354f0eac108930b80f8130d15eeac57a664520281909041b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713726686310763910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1,PodSandboxId:d1c9976590750bf88d5979bb4bae08b0f16eb3adce8935df4e1e2c0b2a23c163,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713726685796854467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df,PodSandboxId:b5257a5fa2e934b51b4d35599b9904d8cc3fb3076ccf81487c5a434013e93212,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713726684201436271,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5,PodSandboxId:13c2da90f46588b5089448006fcab8df169f80ae9ebba54faeaabb426006ea94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713726683882883738,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-b
aa3582ae821,},Annotations:map[string]string{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4,PodSandboxId:7f29e8747a7f1182cce691d55fa10130ba7fa64851cdaf0ff0f1748b59b6db2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713726664193437387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d,PodSandboxId:494d9a87baced0443fe2ea73261277528708e2ae860baafebc100a741a3147ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713726664146128437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[
string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5,PodSandboxId:aee378e8ac0dca7363b5e397b407d13b7a9214c6c3d2d1fd3be19378d069eccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713726664160418282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 6
3419c08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2,PodSandboxId:1d6252d59eb269344679e30121670c6fb9c096f626c96796e4cf5b5c2a2e4553,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713726664089174126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5156ae46-35d6-4518-ad1f-885b67fd201c name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.794147836Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6857374-65fa-4b96-b918-07c594962fe2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.794865529Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-hk7s7,Uid:826c848b-a674-490c-9703-ac39fbc95f4c,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727085926525143,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T19:17:31.718530383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vs5t7,Uid:f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1713727052169700521,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T19:17:31.718537259Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2357556e-faa1-43ba-9e1a-f867acfd75fa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727052099524881,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-21T19:17:31.718523027Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&PodSandboxMetadata{Name:kube-proxy-jg6s4,Uid:c804d5e1-21d2-488c-aa22-baa3582ae821,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1713727052096034513,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T19:17:31.718531875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&PodSandboxMetadata{Name:kindnet-9ldwp,Uid:9fbc53d5-18bf-4b94-9431-79b4ec06767d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727052068499822,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec06767d,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-21T19:17:31.718539612Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&PodSandboxMetadata{Name:etcd-multinode-860427,Uid:59112ccba41f96b0461632935a9f093e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727047252172178,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: 59112ccba41f96b0461632935a9f093e,kubernetes.io/config.seen: 2024-04-21T19:17:26.712018392Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metada
ta:&PodSandboxMetadata{Name:kube-scheduler-multinode-860427,Uid:aa992db549525f97072b4a055cb3a721,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727047249504577,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa992db549525f97072b4a055cb3a721,kubernetes.io/config.seen: 2024-04-21T19:17:26.712025789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-860427,Uid:9ea1b971fb3b5a98bb377b76133472be,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727047248176275,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9ea1b971fb3b5a98bb377b76133472be,kubernetes.io/config.seen: 2024-04-21T19:17:26.712024575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-860427,Uid:d8d947174ec267a2fd558d103fc08c08,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713727047236365423,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kuberne
tes.io/config.hash: d8d947174ec267a2fd558d103fc08c08,kubernetes.io/config.seen: 2024-04-21T19:17:26.712023175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d6857374-65fa-4b96-b918-07c594962fe2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.795542828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eae68357-2960-4dfb-a30f-f12eee2a7c4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.795777982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eae68357-2960-4dfb-a30f-f12eee2a7c4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:21:18 multinode-860427 crio[2885]: time="2024-04-21 19:21:18.795984123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f65de047388341d76909d1ccf5bae5887c5ba00d3232bb4eaa48fa2badd3524c,PodSandboxId:14a6e43c576d5eaf4bc13fc1d8aa2dad4d1cc02269106a59a6f97d26eebbc2bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713727086085916081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hk7s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 826c848b-a674-490c-9703-ac39fbc95f4c,},Annotations:map[string]string{io.kubernetes.container.hash: af2c6ffb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940,PodSandboxId:4ab17feb380c184c32a227116beffa5d1729fcd49343c99136557e2e91661a6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713727052596096302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vs5t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a7eaeb-e84d-43b3-803d-64ac0f894fa0,},Annotations:map[string]string{io.kubernetes.container.hash: 7381cd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc,PodSandboxId:dda1786cc67a6ac5566da002ab1584c30a7227f1c5411db370a4e0442843f03f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713727052476163333,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9ldwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbc53d5-18bf-4b94-9431-79b4ec
06767d,},Annotations:map[string]string{io.kubernetes.container.hash: e618f18e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1,PodSandboxId:bec8bbb43bb93801d99ad9aecff1d50869ece42dcf6ab7dc11672f2c14e242f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713727052363719796,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg6s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c804d5e1-21d2-488c-aa22-baa3582ae821,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ea0c0a63,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0709ab54213b896bc6e9570a30e8483da725bcda75ca98bc1e5bdd6969fd55a7,PodSandboxId:64265fe67f58358a1cb440d31f0fb70935d78f4ae68fef251c48b953f19e085a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727052388025418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2357556e-faa1-43ba-9e1a-f867acfd75fa,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3d758faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274,PodSandboxId:bc21dc0654e3a3fcc069f53441b360bf6dd17fff0d1ada917f9613e0ec1833e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713727047542664997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59112ccba41f96b0461632935a9f093e,},Annotations:map[string]string{io.kubernetes.container.hash: 63419c08,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec,PodSandboxId:251fa8224f7f0952230e3be157d9f3e0ab8b7ec8ddce6e11730cfb9c03a4d5e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713727047577855393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa992db549525f97072b4a055cb3a721,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58,PodSandboxId:ecf371bf70e95357135746e846c69733aab2e3f7755af1be1d1939302fbf32bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713727047541796724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ea1b971fb3b5a98bb377b76133472be,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c,PodSandboxId:dae5021f3823b8db077105ef7bdb008748731eba853dd98daceb8040d84dfdc0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713727047462367150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-860427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d947174ec267a2fd558d103fc08c08,},Annotations:map[string]string{io.kubernetes.container.hash: 729e3771,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eae68357-2960-4dfb-a30f-f12eee2a7c4b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f65de04738834       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   14a6e43c576d5       busybox-fc5497c4f-hk7s7
	3a0e0b2881434       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   4ab17feb380c1       coredns-7db6d8ff4d-vs5t7
	719444f97ed78       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   dda1786cc67a6       kindnet-9ldwp
	0709ab54213b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   64265fe67f583       storage-provisioner
	90ad4fdb1c3dc       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   bec8bbb43bb93       kube-proxy-jg6s4
	fe218a845a3aa       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   251fa8224f7f0       kube-scheduler-multinode-860427
	b322cb92ca948       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   bc21dc0654e3a       etcd-multinode-860427
	3a7048938488c       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   ecf371bf70e95       kube-controller-manager-multinode-860427
	2c542f3c92581       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   dae5021f3823b       kube-apiserver-multinode-860427
	e99b231417b34       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   fc3c3ebed26c2       busybox-fc5497c4f-hk7s7
	9b0f66c6a810d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   1a7205876fa91       storage-provisioner
	ff5d612fdfb3e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   d1c9976590750       coredns-7db6d8ff4d-vs5t7
	1b1c152114f7d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   b5257a5fa2e93       kindnet-9ldwp
	8e02f2b64b9de       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   13c2da90f4658       kube-proxy-jg6s4
	c5b23d24e555c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   7f29e8747a7f1       kube-scheduler-multinode-860427
	9fb589731724c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   aee378e8ac0dc       etcd-multinode-860427
	8b1fa05f21062       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   494d9a87baced       kube-apiserver-multinode-860427
	cc29f46df3151       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   1d6252d59eb26       kube-controller-manager-multinode-860427
	
	
	==> coredns [3a0e0b28814340d7bbd307ef2758c2f4d5f5cc81cd545109c0e90e6df4e81940] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58639 - 59006 "HINFO IN 6551346853364553102.4752810316306585780. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026259095s
	
	
	==> coredns [ff5d612fdfb3ede58fd1f817d519c2a0f60fd4e21ac3786ff1866f5be689b9e1] <==
	[INFO] 10.244.0.3:58767 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001786061s
	[INFO] 10.244.0.3:37173 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00031116s
	[INFO] 10.244.0.3:59498 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071934s
	[INFO] 10.244.0.3:60013 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001165337s
	[INFO] 10.244.0.3:41860 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005795s
	[INFO] 10.244.0.3:60932 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095114s
	[INFO] 10.244.0.3:45075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080404s
	[INFO] 10.244.1.2:35677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249307s
	[INFO] 10.244.1.2:49702 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200978s
	[INFO] 10.244.1.2:58015 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111852s
	[INFO] 10.244.1.2:54380 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000260643s
	[INFO] 10.244.0.3:46013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012317s
	[INFO] 10.244.0.3:40454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008508s
	[INFO] 10.244.0.3:47947 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076971s
	[INFO] 10.244.0.3:37172 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000666s
	[INFO] 10.244.1.2:43856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163894s
	[INFO] 10.244.1.2:37507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000690623s
	[INFO] 10.244.1.2:59238 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139066s
	[INFO] 10.244.1.2:60046 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159981s
	[INFO] 10.244.0.3:54205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080256s
	[INFO] 10.244.0.3:54530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000049187s
	[INFO] 10.244.0.3:50154 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000030665s
	[INFO] 10.244.0.3:52243 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00002735s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-860427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-860427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_11_10_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860427
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:17:31 +0000   Sun, 21 Apr 2024 19:11:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    multinode-860427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bba815c2ad94d64bea00a33989824af
	  System UUID:                6bba815c-2ad9-4d64-bea0-0a33989824af
	  Boot ID:                    76a8137b-dbd7-47e7-bb06-0eb11c9e8461
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hk7s7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 coredns-7db6d8ff4d-vs5t7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m56s
	  kube-system                 etcd-multinode-860427                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-9ldwp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-apiserver-multinode-860427             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-860427    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-jg6s4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-scheduler-multinode-860427             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-860427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-860427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-860427 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m57s                  node-controller  Node multinode-860427 event: Registered Node multinode-860427 in Controller
	  Normal  NodeReady                9m54s                  kubelet          Node multinode-860427 status is now: NodeReady
	  Normal  Starting                 3m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m53s)  kubelet          Node multinode-860427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m53s)  kubelet          Node multinode-860427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node multinode-860427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node multinode-860427 event: Registered Node multinode-860427 in Controller
	
	
	Name:               multinode-860427-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860427-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-860427
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_18_12_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:18:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860427-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:18:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:19:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:19:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:19:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Apr 2024 19:18:42 +0000   Sun, 21 Apr 2024 19:19:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    multinode-860427-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2825fddabdeb4c06a5bb08fe55061b6a
	  System UUID:                2825fdda-bdeb-4c06-a5bb-08fe55061b6a
	  Boot ID:                    6f25b561-af0f-4196-8630-ca5efeabc205
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bsh66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kindnet-nw7qf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m14s
	  kube-system                 kube-proxy-qwtz4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m3s                   kube-proxy       
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m14s (x2 over 9m14s)  kubelet          Node multinode-860427-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s (x2 over 9m14s)  kubelet          Node multinode-860427-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s (x2 over 9m14s)  kubelet          Node multinode-860427-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m4s                   kubelet          Node multinode-860427-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)    kubelet          Node multinode-860427-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)    kubelet          Node multinode-860427-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)    kubelet          Node multinode-860427-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m59s                  kubelet          Node multinode-860427-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-860427-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +10.968699] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.063420] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061360] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.170002] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.137810] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.329562] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.789014] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.064580] kauditd_printk_skb: 130 callbacks suppressed
	[Apr21 19:11] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +6.569054] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.083339] kauditd_printk_skb: 97 callbacks suppressed
	[ +13.734722] systemd-fstab-generator[1476]: Ignoring "noauto" option for root device
	[  +0.137373] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.229048] kauditd_printk_skb: 82 callbacks suppressed
	[Apr21 19:17] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.154820] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.185743] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.145363] systemd-fstab-generator[2842]: Ignoring "noauto" option for root device
	[  +0.297750] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +0.754231] systemd-fstab-generator[2971]: Ignoring "noauto" option for root device
	[  +1.855347] systemd-fstab-generator[3097]: Ignoring "noauto" option for root device
	[  +5.770520] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.137504] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.670673] systemd-fstab-generator[3918]: Ignoring "noauto" option for root device
	[Apr21 19:18] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [9fb589731724c2c40b92509c30b076684405743c4d8a62976644d10788b93df5] <==
	{"level":"info","ts":"2024-04-21T19:11:04.666459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-04-21T19:11:04.666743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:11:04.67698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T19:11:04.682599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:11:04.682702Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:11:04.682783Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:11:04.682959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:11:04.682994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-04-21T19:11:54.25539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.984266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176521758976184188 > lease_revoke:<id:1e348f0211af0f47>","response":"size:27"}
	{"level":"warn","ts":"2024-04-21T19:12:05.45912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.380032ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176521758976184241 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-860427-m02.17c8616079a4e2d9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-860427-m02.17c8616079a4e2d9\" value_size:642 lease:2176521758976183616 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T19:12:05.459658Z","caller":"traceutil/trace.go:171","msg":"trace[1465477537] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"234.978624ms","start":"2024-04-21T19:12:05.224662Z","end":"2024-04-21T19:12:05.45964Z","steps":["trace[1465477537] 'process raft request'  (duration: 72.630033ms)","trace[1465477537] 'compare'  (duration: 160.967897ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T19:12:05.459817Z","caller":"traceutil/trace.go:171","msg":"trace[66219805] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"182.837644ms","start":"2024-04-21T19:12:05.276882Z","end":"2024-04-21T19:12:05.45972Z","steps":["trace[66219805] 'process raft request'  (duration: 182.597618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T19:12:53.681442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.473442ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176521758976184648 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-860427-m03.17c8616bb3005bae\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-860427-m03.17c8616bb3005bae\" value_size:642 lease:2176521758976184317 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T19:12:53.681814Z","caller":"traceutil/trace.go:171","msg":"trace[1234211206] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"246.188608ms","start":"2024-04-21T19:12:53.435583Z","end":"2024-04-21T19:12:53.681772Z","steps":["trace[1234211206] 'process raft request'  (duration: 74.304647ms)","trace[1234211206] 'compare'  (duration: 171.276929ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T19:12:53.682562Z","caller":"traceutil/trace.go:171","msg":"trace[10322066] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"179.030374ms","start":"2024-04-21T19:12:53.503517Z","end":"2024-04-21T19:12:53.682548Z","steps":["trace[10322066] 'process raft request'  (duration: 178.144029ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T19:15:51.608493Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-21T19:15:51.608651Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-860427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"warn","ts":"2024-04-21T19:15:51.608858Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-21T19:15:51.60895Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-21T19:15:51.662405Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-21T19:15:51.662462Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-21T19:15:51.662533Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"info","ts":"2024-04-21T19:15:51.665869Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:15:51.666328Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:15:51.666366Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-860427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [b322cb92ca9488a4839efa73cb1f35c271ec148135cff52250905b3cd89c5274] <==
	{"level":"info","ts":"2024-04-21T19:17:28.338031Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T19:17:28.338066Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T19:17:28.344147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-04-21T19:17:28.345465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-04-21T19:17:28.347542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:17:28.347671Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:17:28.358916Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T19:17:28.362457Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3276445ff8d31e34","initial-advertise-peer-urls":["https://192.168.39.100:2380"],"listen-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T19:17:28.362832Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T19:17:28.360493Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:17:28.365282Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-04-21T19:17:29.94487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-21T19:17:29.94491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:17:29.944955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-04-21T19:17:29.944969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.944975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.944983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.944993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-04-21T19:17:29.950393Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:multinode-860427 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:17:29.950475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:17:29.950632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:17:29.95119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:17:29.95129Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:17:29.95312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-04-21T19:17:29.953458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:21:19 up 10 min,  0 users,  load average: 0.02, 0.10, 0.08
	Linux multinode-860427 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1b1c152114f7d9c24ea8494c7a53a6f1e3425447e56602eeaef584686c44d7df] <==
	I0421 19:15:05.334649       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:15.341520       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:15.341565       1 main.go:227] handling current node
	I0421 19:15:15.341576       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:15.341582       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:15.341698       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:15.341705       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:25.353601       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:25.353753       1 main.go:227] handling current node
	I0421 19:15:25.353778       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:25.353784       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:25.353908       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:25.353944       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:35.358583       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:35.358714       1 main.go:227] handling current node
	I0421 19:15:35.358755       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:35.358780       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:35.358915       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:35.358935       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	I0421 19:15:45.369434       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:15:45.369679       1 main.go:227] handling current node
	I0421 19:15:45.369710       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:15:45.369808       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:15:45.370145       1 main.go:223] Handling node with IPs: map[192.168.39.162:{}]
	I0421 19:15:45.370173       1 main.go:250] Node multinode-860427-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [719444f97ed788f3fd793319c8643ab3aa69a49d53386bcd6c1948cbe0f620bc] <==
	I0421 19:20:13.552762       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:20:23.558442       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:20:23.558497       1 main.go:227] handling current node
	I0421 19:20:23.558512       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:20:23.558521       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:20:33.562495       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:20:33.562614       1 main.go:227] handling current node
	I0421 19:20:33.562645       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:20:33.562750       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:20:43.575870       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:20:43.576043       1 main.go:227] handling current node
	I0421 19:20:43.576075       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:20:43.576295       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:20:53.584318       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:20:53.584365       1 main.go:227] handling current node
	I0421 19:20:53.584379       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:20:53.584385       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:21:03.594124       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:21:03.594520       1 main.go:227] handling current node
	I0421 19:21:03.594617       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:21:03.594657       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	I0421 19:21:13.600964       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0421 19:21:13.601086       1 main.go:227] handling current node
	I0421 19:21:13.601181       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0421 19:21:13.601318       1 main.go:250] Node multinode-860427-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2c542f3c92581df5c1c37ed23dd83342c4dea74df2786abc84ef5a02c4e0297c] <==
	I0421 19:17:31.453427       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 19:17:31.453472       1 policy_source.go:224] refreshing policies
	I0421 19:17:31.464520       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0421 19:17:31.466811       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 19:17:31.466849       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 19:17:31.468112       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0421 19:17:31.468167       1 aggregator.go:165] initial CRD sync complete...
	I0421 19:17:31.468192       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 19:17:31.468275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 19:17:31.468298       1 cache.go:39] Caches are synced for autoregister controller
	I0421 19:17:31.469357       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 19:17:31.469453       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 19:17:31.469887       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0421 19:17:31.469944       1 shared_informer.go:320] Caches are synced for configmaps
	I0421 19:17:31.472329       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 19:17:31.474726       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0421 19:17:31.480911       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0421 19:17:32.288938       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0421 19:17:33.842612       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0421 19:17:33.979515       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0421 19:17:33.993119       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0421 19:17:34.058927       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 19:17:34.065581       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0421 19:17:44.329393       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 19:17:44.359444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8b1fa05f21062ee151618f73219e20021baced5688b21c0654a7d0ca0380da4d] <==
	W0421 19:15:51.636479       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636530       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636582       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636640       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636691       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.636741       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.637129       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.639578       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.639847       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.640529       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.640599       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.640751       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641029       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641099       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641152       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641319       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641381       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641438       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641568       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641618       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641662       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641710       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641755       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641797       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0421 19:15:51.641840       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3a7048938488c9c962cd9ae67607e33be03d0b7a3ca51f27633cc3c553198c58] <==
	I0421 19:18:12.027844       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m02\" does not exist"
	I0421 19:18:12.037703       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m02" podCIDRs=["10.244.1.0/24"]
	I0421 19:18:12.948535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.746µs"
	I0421 19:18:12.963925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.287µs"
	I0421 19:18:13.011738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.837µs"
	I0421 19:18:13.021361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.441µs"
	I0421 19:18:13.027752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.757µs"
	I0421 19:18:14.645744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.131µs"
	I0421 19:18:20.540551       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:20.561022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.444µs"
	I0421 19:18:20.577678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.807µs"
	I0421 19:18:24.415853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.162596ms"
	I0421 19:18:24.416464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.643µs"
	I0421 19:18:41.196324       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:42.200886       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m03\" does not exist"
	I0421 19:18:42.200979       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:42.211775       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m03" podCIDRs=["10.244.2.0/24"]
	I0421 19:18:51.470889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:18:57.243378       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:19:34.404499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.994484ms"
	I0421 19:19:34.406894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.563µs"
	I0421 19:20:04.336383       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wtv4m"
	I0421 19:20:04.369255       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wtv4m"
	I0421 19:20:04.369361       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rpj7t"
	I0421 19:20:04.398887       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rpj7t"
	
	
	==> kube-controller-manager [cc29f46df31511bcfd35ec4edae678735a02970c6de4b1dff97883a106f59ef2] <==
	I0421 19:11:35.686976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.477µs"
	I0421 19:12:05.462722       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m02\" does not exist"
	I0421 19:12:05.494736       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m02" podCIDRs=["10.244.1.0/24"]
	I0421 19:12:07.197302       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-860427-m02"
	I0421 19:12:15.225082       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:12:17.678609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.643223ms"
	I0421 19:12:17.698828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.156317ms"
	I0421 19:12:17.698889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.01µs"
	I0421 19:12:21.165319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.556944ms"
	I0421 19:12:21.165613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.951µs"
	I0421 19:12:21.280925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.88178ms"
	I0421 19:12:21.281161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.518µs"
	I0421 19:12:53.684013       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m03\" does not exist"
	I0421 19:12:53.684750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:12:53.716789       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m03" podCIDRs=["10.244.2.0/24"]
	I0421 19:12:57.218387       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-860427-m03"
	I0421 19:13:03.976691       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:13:34.816612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:13:35.815353       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:13:35.817927       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-860427-m03\" does not exist"
	I0421 19:13:35.838678       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-860427-m03" podCIDRs=["10.244.3.0/24"]
	I0421 19:13:45.003553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:14:27.276100       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-860427-m02"
	I0421 19:14:32.375762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.872105ms"
	I0421 19:14:32.375925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.089µs"
	
	
	==> kube-proxy [8e02f2b64b9def606be6a2ca51570afc4a0ee1d39d1f725a8a8544126c353dd5] <==
	I0421 19:11:24.264001       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:11:24.279167       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0421 19:11:24.358020       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:11:24.358085       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:11:24.358107       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:11:24.367090       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:11:24.367405       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:11:24.367443       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:11:24.368764       1 config.go:192] "Starting service config controller"
	I0421 19:11:24.368807       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:11:24.368831       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:11:24.368836       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:11:24.369129       1 config.go:319] "Starting node config controller"
	I0421 19:11:24.369169       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:11:24.471385       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:11:24.471442       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 19:11:24.471681       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [90ad4fdb1c3dcb113006444178aa6679d058a4a486659152635cca0645d5f1c1] <==
	I0421 19:17:32.817685       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:17:32.844730       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0421 19:17:32.945421       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:17:32.945542       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:17:32.945627       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:17:32.950727       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:17:32.951168       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:17:32.951313       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:17:32.952676       1 config.go:192] "Starting service config controller"
	I0421 19:17:32.955392       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:17:32.955539       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:17:32.955666       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:17:32.955690       1 config.go:319] "Starting node config controller"
	I0421 19:17:32.955764       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:17:33.055902       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 19:17:33.056003       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:17:33.057416       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c5b23d24e555c7c1d03e41b3a996e1a63a75cdfe24e320c0ee22d3852fa703c4] <==
	E0421 19:11:07.007339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:11:07.007727       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:07.007846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:07.007997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:07.008105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:07.013361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:11:07.013481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:11:07.829032       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:11:07.829094       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:11:07.830068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:11:07.830250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:11:07.928365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:11:07.928503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:11:07.963783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:07.963813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:08.057077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 19:11:08.057107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 19:11:08.064099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:11:08.064286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:11:08.207119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:11:08.207277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:11:08.244181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 19:11:08.244285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0421 19:11:10.572888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0421 19:15:51.618748       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fe218a845a3aa23d4d3ec0823772ff54ff89706d902c0edf0029bd56804237ec] <==
	I0421 19:17:28.713684       1 serving.go:380] Generated self-signed cert in-memory
	W0421 19:17:31.358538       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0421 19:17:31.359156       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:17:31.359465       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0421 19:17:31.359643       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0421 19:17:31.389933       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0421 19:17:31.389986       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:17:31.391858       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0421 19:17:31.391910       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 19:17:31.391897       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0421 19:17:31.391915       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 19:17:31.492415       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.721467    3104 topology_manager.go:215] "Topology Admit Handler" podUID="2357556e-faa1-43ba-9e1a-f867acfd75fa" podNamespace="kube-system" podName="storage-provisioner"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.721735    3104 topology_manager.go:215] "Topology Admit Handler" podUID="826c848b-a674-490c-9703-ac39fbc95f4c" podNamespace="default" podName="busybox-fc5497c4f-hk7s7"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.734529    3104 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.771763    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c804d5e1-21d2-488c-aa22-baa3582ae821-lib-modules\") pod \"kube-proxy-jg6s4\" (UID: \"c804d5e1-21d2-488c-aa22-baa3582ae821\") " pod="kube-system/kube-proxy-jg6s4"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.771986    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fbc53d5-18bf-4b94-9431-79b4ec06767d-lib-modules\") pod \"kindnet-9ldwp\" (UID: \"9fbc53d5-18bf-4b94-9431-79b4ec06767d\") " pod="kube-system/kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772266    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2357556e-faa1-43ba-9e1a-f867acfd75fa-tmp\") pod \"storage-provisioner\" (UID: \"2357556e-faa1-43ba-9e1a-f867acfd75fa\") " pod="kube-system/storage-provisioner"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772390    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9fbc53d5-18bf-4b94-9431-79b4ec06767d-cni-cfg\") pod \"kindnet-9ldwp\" (UID: \"9fbc53d5-18bf-4b94-9431-79b4ec06767d\") " pod="kube-system/kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772566    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fbc53d5-18bf-4b94-9431-79b4ec06767d-xtables-lock\") pod \"kindnet-9ldwp\" (UID: \"9fbc53d5-18bf-4b94-9431-79b4ec06767d\") " pod="kube-system/kindnet-9ldwp"
	Apr 21 19:17:31 multinode-860427 kubelet[3104]: I0421 19:17:31.772724    3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c804d5e1-21d2-488c-aa22-baa3582ae821-xtables-lock\") pod \"kube-proxy-jg6s4\" (UID: \"c804d5e1-21d2-488c-aa22-baa3582ae821\") " pod="kube-system/kube-proxy-jg6s4"
	Apr 21 19:17:39 multinode-860427 kubelet[3104]: I0421 19:17:39.501272    3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 21 19:18:26 multinode-860427 kubelet[3104]: E0421 19:18:26.825164    3104 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:18:26 multinode-860427 kubelet[3104]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:19:26 multinode-860427 kubelet[3104]: E0421 19:19:26.824856    3104 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:19:26 multinode-860427 kubelet[3104]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:19:26 multinode-860427 kubelet[3104]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:19:26 multinode-860427 kubelet[3104]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:19:26 multinode-860427 kubelet[3104]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:20:26 multinode-860427 kubelet[3104]: E0421 19:20:26.823988    3104 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:20:26 multinode-860427 kubelet[3104]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:20:26 multinode-860427 kubelet[3104]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:20:26 multinode-860427 kubelet[3104]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:20:26 multinode-860427 kubelet[3104]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:21:18.308297   42812 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18702-3854/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-860427 -n multinode-860427
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-860427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.68s)

                                                
                                    
x
+
TestPreload (301.1s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-643468 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0421 19:25:52.256740   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 19:26:09.209331   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 19:29:06.205399   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-643468 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m36.758619628s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-643468 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-643468 image pull gcr.io/k8s-minikube/busybox: (2.970857348s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-643468
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-643468: (7.307290692s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-643468 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-643468 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.904105489s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-643468 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
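For reference, the failing assertion above (preload_test.go:76) is essentially a substring check over the `image list` output captured after the restart. Below is a minimal sketch of that kind of check, assuming the binary path and profile name shown in the log; it is not the actual preload_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same command the harness logs above (binary path and profile
	// name are taken from the log and used here purely for illustration).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-643468", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The test expects the image pulled before the stop to survive the restart.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("FAIL: gcr.io/k8s-minikube/busybox missing from image list output")
	}
}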
panic.go:626: *** TestPreload FAILED at 2024-04-21 19:30:28.322494434 +0000 UTC m=+4136.846629889
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-643468 -n test-preload-643468
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-643468 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-643468 logs -n 25: (1.172801423s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427 sudo cat                                       | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m03_multinode-860427.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt                       | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m02:/home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n                                                                 | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | multinode-860427-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-860427 ssh -n multinode-860427-m02 sudo cat                                   | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | /home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-860427 node stop m03                                                          | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	| node    | multinode-860427 node start                                                             | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	| stop    | -p multinode-860427                                                                     | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	| start   | -p multinode-860427                                                                     | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:15 UTC | 21 Apr 24 19:18 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC |                     |
	| node    | multinode-860427 node delete                                                            | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC | 21 Apr 24 19:18 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-860427 stop                                                                   | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:18 UTC |                     |
	| start   | -p multinode-860427                                                                     | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:21 UTC | 21 Apr 24 19:24 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-860427                                                                | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:24 UTC |                     |
	| start   | -p multinode-860427-m02                                                                 | multinode-860427-m02 | jenkins | v1.33.0 | 21 Apr 24 19:24 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-860427-m03                                                                 | multinode-860427-m03 | jenkins | v1.33.0 | 21 Apr 24 19:24 UTC | 21 Apr 24 19:25 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-860427                                                                 | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:25 UTC |                     |
	| delete  | -p multinode-860427-m03                                                                 | multinode-860427-m03 | jenkins | v1.33.0 | 21 Apr 24 19:25 UTC | 21 Apr 24 19:25 UTC |
	| delete  | -p multinode-860427                                                                     | multinode-860427     | jenkins | v1.33.0 | 21 Apr 24 19:25 UTC | 21 Apr 24 19:25 UTC |
	| start   | -p test-preload-643468                                                                  | test-preload-643468  | jenkins | v1.33.0 | 21 Apr 24 19:25 UTC | 21 Apr 24 19:29 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-643468 image pull                                                          | test-preload-643468  | jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:29 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-643468                                                                  | test-preload-643468  | jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:29 UTC |
	| start   | -p test-preload-643468                                                                  | test-preload-643468  | jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:30 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-643468 image list                                                          | test-preload-643468  | jenkins | v1.33.0 | 21 Apr 24 19:30 UTC | 21 Apr 24 19:30 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:29:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:29:17.228755   45696 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:29:17.228876   45696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:29:17.228888   45696 out.go:304] Setting ErrFile to fd 2...
	I0421 19:29:17.228894   45696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:29:17.229095   45696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:29:17.229630   45696 out.go:298] Setting JSON to false
	I0421 19:29:17.230550   45696 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4255,"bootTime":1713723502,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:29:17.230612   45696 start.go:139] virtualization: kvm guest
	I0421 19:29:17.233164   45696 out.go:177] * [test-preload-643468] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:29:17.234619   45696 notify.go:220] Checking for updates...
	I0421 19:29:17.234630   45696 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:29:17.236168   45696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:29:17.237594   45696 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:29:17.238806   45696 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:29:17.240061   45696 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:29:17.241376   45696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:29:17.243152   45696 config.go:182] Loaded profile config "test-preload-643468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0421 19:29:17.243809   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:29:17.243887   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:29:17.258655   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0421 19:29:17.259073   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:29:17.259572   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:29:17.259593   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:29:17.259897   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:29:17.260043   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:17.262042   45696 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0421 19:29:17.263279   45696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:29:17.263599   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:29:17.263642   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:29:17.278686   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0421 19:29:17.279058   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:29:17.279504   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:29:17.279524   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:29:17.279839   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:29:17.280030   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:17.315808   45696 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:29:17.317133   45696 start.go:297] selected driver: kvm2
	I0421 19:29:17.317151   45696 start.go:901] validating driver "kvm2" against &{Name:test-preload-643468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-643468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:29:17.317280   45696 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:29:17.318001   45696 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:29:17.318101   45696 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:29:17.333363   45696 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:29:17.333686   45696 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:29:17.333756   45696 cni.go:84] Creating CNI manager for ""
	I0421 19:29:17.333766   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:29:17.333826   45696 start.go:340] cluster config:
	{Name:test-preload-643468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-643468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:29:17.333922   45696 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:29:17.336483   45696 out.go:177] * Starting "test-preload-643468" primary control-plane node in "test-preload-643468" cluster
	I0421 19:29:17.337651   45696 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0421 19:29:17.454142   45696 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0421 19:29:17.454177   45696 cache.go:56] Caching tarball of preloaded images
	I0421 19:29:17.454306   45696 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0421 19:29:17.456118   45696 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0421 19:29:17.457332   45696 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0421 19:29:17.566535   45696 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0421 19:29:28.879399   45696 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0421 19:29:28.879492   45696 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0421 19:29:29.715453   45696 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
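	(The download-and-verify step logged just above amounts to fetching the preload tarball and comparing its md5 digest with the value in the ?checksum=md5:... query string. Below is a minimal sketch of that verification, reusing the cached path and digest from the log; it is an illustration under that reading, not minikube's actual download code.)

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Expected digest and cached tarball path are copied from the log above
	// and serve only as an example.
	const expected = "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"
	path := "/home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"

	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	// Hash the downloaded file and compare against the published md5.
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println("hash:", err)
		return
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, expected)
	} else {
		fmt.Println("preload tarball checksum OK")
	}
}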
	I0421 19:29:29.715593   45696 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/config.json ...
	I0421 19:29:29.715882   45696 start.go:360] acquireMachinesLock for test-preload-643468: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:29:29.715963   45696 start.go:364] duration metric: took 53.953µs to acquireMachinesLock for "test-preload-643468"
	I0421 19:29:29.715981   45696 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:29:29.715990   45696 fix.go:54] fixHost starting: 
	I0421 19:29:29.716419   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:29:29.716467   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:29:29.730617   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0421 19:29:29.731004   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:29:29.731538   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:29:29.731560   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:29:29.731945   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:29:29.732138   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:29.732292   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetState
	I0421 19:29:29.733659   45696 fix.go:112] recreateIfNeeded on test-preload-643468: state=Stopped err=<nil>
	I0421 19:29:29.733697   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	W0421 19:29:29.733862   45696 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:29:29.735945   45696 out.go:177] * Restarting existing kvm2 VM for "test-preload-643468" ...
	I0421 19:29:29.737243   45696 main.go:141] libmachine: (test-preload-643468) Calling .Start
	I0421 19:29:29.737405   45696 main.go:141] libmachine: (test-preload-643468) Ensuring networks are active...
	I0421 19:29:29.738123   45696 main.go:141] libmachine: (test-preload-643468) Ensuring network default is active
	I0421 19:29:29.738400   45696 main.go:141] libmachine: (test-preload-643468) Ensuring network mk-test-preload-643468 is active
	I0421 19:29:29.738762   45696 main.go:141] libmachine: (test-preload-643468) Getting domain xml...
	I0421 19:29:29.739388   45696 main.go:141] libmachine: (test-preload-643468) Creating domain...
	I0421 19:29:30.916440   45696 main.go:141] libmachine: (test-preload-643468) Waiting to get IP...
	I0421 19:29:30.917385   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:30.917759   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:30.917848   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:30.917764   45779 retry.go:31] will retry after 258.528313ms: waiting for machine to come up
	I0421 19:29:31.178321   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:31.178792   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:31.178824   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:31.178773   45779 retry.go:31] will retry after 295.039528ms: waiting for machine to come up
	I0421 19:29:31.475238   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:31.475811   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:31.475836   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:31.475769   45779 retry.go:31] will retry after 469.671015ms: waiting for machine to come up
	I0421 19:29:31.947567   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:31.948045   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:31.948084   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:31.947985   45779 retry.go:31] will retry after 587.864262ms: waiting for machine to come up
	I0421 19:29:32.537776   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:32.538148   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:32.538185   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:32.538111   45779 retry.go:31] will retry after 530.052816ms: waiting for machine to come up
	I0421 19:29:33.069935   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:33.070390   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:33.070431   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:33.070309   45779 retry.go:31] will retry after 756.411007ms: waiting for machine to come up
	I0421 19:29:33.828159   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:33.828462   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:33.828495   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:33.828414   45779 retry.go:31] will retry after 1.150686032s: waiting for machine to come up
	I0421 19:29:34.981085   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:34.981526   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:34.981552   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:34.981479   45779 retry.go:31] will retry after 1.116804201s: waiting for machine to come up
	I0421 19:29:36.099759   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:36.100260   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:36.100287   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:36.100191   45779 retry.go:31] will retry after 1.281438788s: waiting for machine to come up
	I0421 19:29:37.383521   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:37.383917   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:37.383951   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:37.383875   45779 retry.go:31] will retry after 1.607652527s: waiting for machine to come up
	I0421 19:29:38.993706   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:38.994129   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:38.994156   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:38.994081   45779 retry.go:31] will retry after 1.821161315s: waiting for machine to come up
	I0421 19:29:40.817787   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:40.818338   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:40.818369   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:40.818284   45779 retry.go:31] will retry after 2.500617627s: waiting for machine to come up
	I0421 19:29:43.321932   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:43.322446   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:43.322476   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:43.322418   45779 retry.go:31] will retry after 2.992566873s: waiting for machine to come up
	I0421 19:29:46.317327   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:46.317684   45696 main.go:141] libmachine: (test-preload-643468) DBG | unable to find current IP address of domain test-preload-643468 in network mk-test-preload-643468
	I0421 19:29:46.317708   45696 main.go:141] libmachine: (test-preload-643468) DBG | I0421 19:29:46.317644   45779 retry.go:31] will retry after 3.990110765s: waiting for machine to come up
	I0421 19:29:50.309779   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.310332   45696 main.go:141] libmachine: (test-preload-643468) Found IP for machine: 192.168.39.171
	I0421 19:29:50.310352   45696 main.go:141] libmachine: (test-preload-643468) Reserving static IP address...
	I0421 19:29:50.310369   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has current primary IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.310808   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "test-preload-643468", mac: "52:54:00:02:73:94", ip: "192.168.39.171"} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.310826   45696 main.go:141] libmachine: (test-preload-643468) Reserved static IP address: 192.168.39.171
	I0421 19:29:50.310838   45696 main.go:141] libmachine: (test-preload-643468) DBG | skip adding static IP to network mk-test-preload-643468 - found existing host DHCP lease matching {name: "test-preload-643468", mac: "52:54:00:02:73:94", ip: "192.168.39.171"}
	I0421 19:29:50.310850   45696 main.go:141] libmachine: (test-preload-643468) DBG | Getting to WaitForSSH function...
	I0421 19:29:50.310858   45696 main.go:141] libmachine: (test-preload-643468) Waiting for SSH to be available...
	I0421 19:29:50.313170   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.313476   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.313506   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.313601   45696 main.go:141] libmachine: (test-preload-643468) DBG | Using SSH client type: external
	I0421 19:29:50.313630   45696 main.go:141] libmachine: (test-preload-643468) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa (-rw-------)
	I0421 19:29:50.313663   45696 main.go:141] libmachine: (test-preload-643468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:29:50.313676   45696 main.go:141] libmachine: (test-preload-643468) DBG | About to run SSH command:
	I0421 19:29:50.313688   45696 main.go:141] libmachine: (test-preload-643468) DBG | exit 0
	I0421 19:29:50.438278   45696 main.go:141] libmachine: (test-preload-643468) DBG | SSH cmd err, output: <nil>: 
	I0421 19:29:50.438603   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetConfigRaw
	I0421 19:29:50.439197   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetIP
	I0421 19:29:50.441832   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.442179   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.442212   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.442371   45696 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/config.json ...
	I0421 19:29:50.442558   45696 machine.go:94] provisionDockerMachine start ...
	I0421 19:29:50.442576   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:50.442791   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:50.444756   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.445048   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.445075   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.445171   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:50.445337   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.445493   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.445611   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:50.445793   45696 main.go:141] libmachine: Using SSH client type: native
	I0421 19:29:50.446008   45696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0421 19:29:50.446020   45696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:29:50.554886   45696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:29:50.554919   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetMachineName
	I0421 19:29:50.555201   45696 buildroot.go:166] provisioning hostname "test-preload-643468"
	I0421 19:29:50.555233   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetMachineName
	I0421 19:29:50.555403   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:50.558038   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.558419   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.558448   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.558582   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:50.558763   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.558931   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.559041   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:50.559219   45696 main.go:141] libmachine: Using SSH client type: native
	I0421 19:29:50.559406   45696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0421 19:29:50.559424   45696 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-643468 && echo "test-preload-643468" | sudo tee /etc/hostname
	I0421 19:29:50.688437   45696 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-643468
	
	I0421 19:29:50.688483   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:50.691158   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.691445   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.691469   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.691619   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:50.691837   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.692020   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.692175   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:50.692297   45696 main.go:141] libmachine: Using SSH client type: native
	I0421 19:29:50.692476   45696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0421 19:29:50.692501   45696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-643468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-643468/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-643468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:29:50.818141   45696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:29:50.818170   45696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:29:50.818201   45696 buildroot.go:174] setting up certificates
	I0421 19:29:50.818215   45696 provision.go:84] configureAuth start
	I0421 19:29:50.818233   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetMachineName
	I0421 19:29:50.818510   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetIP
	I0421 19:29:50.821136   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.821496   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.821542   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.821633   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:50.823924   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.824282   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.824317   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.824436   45696 provision.go:143] copyHostCerts
	I0421 19:29:50.824488   45696 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:29:50.824497   45696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:29:50.824560   45696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:29:50.824645   45696 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:29:50.824653   45696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:29:50.824676   45696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:29:50.824737   45696 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:29:50.824744   45696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:29:50.824764   45696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:29:50.824820   45696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.test-preload-643468 san=[127.0.0.1 192.168.39.171 localhost minikube test-preload-643468]
	I0421 19:29:50.947803   45696 provision.go:177] copyRemoteCerts
	I0421 19:29:50.947862   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:29:50.947883   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:50.950376   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.950681   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:50.950710   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:50.950910   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:50.951128   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:50.951296   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:50.951441   45696 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa Username:docker}
	I0421 19:29:51.037594   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:29:51.065204   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 19:29:51.091546   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:29:51.118848   45696 provision.go:87] duration metric: took 300.616454ms to configureAuth
	I0421 19:29:51.118876   45696 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:29:51.119029   45696 config.go:182] Loaded profile config "test-preload-643468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0421 19:29:51.119091   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:51.121428   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.121807   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:51.121838   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.121991   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:51.122209   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.122354   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.122486   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:51.122634   45696 main.go:141] libmachine: Using SSH client type: native
	I0421 19:29:51.122801   45696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0421 19:29:51.122815   45696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:29:51.394693   45696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:29:51.394715   45696 machine.go:97] duration metric: took 952.143528ms to provisionDockerMachine
	I0421 19:29:51.394741   45696 start.go:293] postStartSetup for "test-preload-643468" (driver="kvm2")
	I0421 19:29:51.394754   45696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:29:51.394779   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:51.395097   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:29:51.395125   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:51.397793   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.398167   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:51.398203   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.398311   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:51.398489   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.398653   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:51.398836   45696 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa Username:docker}
	I0421 19:29:51.482662   45696 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:29:51.487759   45696 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:29:51.487784   45696 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:29:51.487858   45696 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:29:51.487963   45696 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:29:51.488077   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:29:51.498721   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:29:51.525761   45696 start.go:296] duration metric: took 131.007781ms for postStartSetup
	I0421 19:29:51.525797   45696 fix.go:56] duration metric: took 21.809807004s for fixHost
	I0421 19:29:51.525819   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:51.528281   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.528590   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:51.528616   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.528755   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:51.528973   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.529169   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.529344   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:51.529528   45696 main.go:141] libmachine: Using SSH client type: native
	I0421 19:29:51.529704   45696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0421 19:29:51.529717   45696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:29:51.635083   45696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713727791.620656494
	
	I0421 19:29:51.635126   45696 fix.go:216] guest clock: 1713727791.620656494
	I0421 19:29:51.635137   45696 fix.go:229] Guest: 2024-04-21 19:29:51.620656494 +0000 UTC Remote: 2024-04-21 19:29:51.525802079 +0000 UTC m=+34.343465516 (delta=94.854415ms)
	I0421 19:29:51.635170   45696 fix.go:200] guest clock delta is within tolerance: 94.854415ms
	I0421 19:29:51.635189   45696 start.go:83] releasing machines lock for "test-preload-643468", held for 21.919212235s
	I0421 19:29:51.635214   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:51.635490   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetIP
	I0421 19:29:51.637939   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.638256   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:51.638286   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.638444   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:51.638912   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:51.639109   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:29:51.639218   45696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:29:51.639253   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:51.639360   45696 ssh_runner.go:195] Run: cat /version.json
	I0421 19:29:51.639387   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:29:51.641537   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.641839   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:51.641867   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.642107   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:51.642206   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.642303   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.642455   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:51.642522   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:51.642546   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:51.642575   45696 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa Username:docker}
	I0421 19:29:51.642880   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:29:51.643028   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:29:51.643173   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:29:51.643317   45696 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa Username:docker}
	I0421 19:29:51.752248   45696 ssh_runner.go:195] Run: systemctl --version
	I0421 19:29:51.759208   45696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:29:51.914138   45696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:29:51.922999   45696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:29:51.923073   45696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:29:51.942808   45696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:29:51.942835   45696 start.go:494] detecting cgroup driver to use...
	I0421 19:29:51.942891   45696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:29:51.960860   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:29:51.976470   45696 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:29:51.976528   45696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:29:51.991564   45696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:29:52.006152   45696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:29:52.141678   45696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:29:52.303888   45696 docker.go:233] disabling docker service ...
	I0421 19:29:52.303951   45696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:29:52.319231   45696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:29:52.333546   45696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:29:52.462372   45696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:29:52.589212   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:29:52.604546   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:29:52.626144   45696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0421 19:29:52.626212   45696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.637501   45696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:29:52.637554   45696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.648467   45696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.659212   45696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.669979   45696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:29:52.681188   45696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.691765   45696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.710707   45696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:29:52.721931   45696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:29:52.732221   45696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:29:52.732285   45696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:29:52.747261   45696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:29:52.757426   45696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:29:52.875731   45696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:29:53.028642   45696 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:29:53.028729   45696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:29:53.033842   45696 start.go:562] Will wait 60s for crictl version
	I0421 19:29:53.033897   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:53.037984   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:29:53.076023   45696 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:29:53.076131   45696 ssh_runner.go:195] Run: crio --version
	I0421 19:29:53.105957   45696 ssh_runner.go:195] Run: crio --version
	I0421 19:29:53.139565   45696 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0421 19:29:53.141183   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetIP
	I0421 19:29:53.143706   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:53.144076   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:29:53.144107   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:29:53.144293   45696 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0421 19:29:53.148879   45696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:29:53.162964   45696 kubeadm.go:877] updating cluster {Name:test-preload-643468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-643468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:29:53.163064   45696 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0421 19:29:53.163106   45696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:29:53.204322   45696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0421 19:29:53.204379   45696 ssh_runner.go:195] Run: which lz4
	I0421 19:29:53.208999   45696 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 19:29:53.213577   45696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:29:53.213603   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0421 19:29:55.112872   45696 crio.go:462] duration metric: took 1.903908226s to copy over tarball
	I0421 19:29:55.113004   45696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 19:29:57.767257   45696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.654211976s)
	I0421 19:29:57.767286   45696 crio.go:469] duration metric: took 2.654384936s to extract the tarball
	I0421 19:29:57.767296   45696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 19:29:57.809677   45696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:29:57.854896   45696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0421 19:29:57.854924   45696 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0421 19:29:57.854996   45696 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:29:57.855023   45696 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0421 19:29:57.855069   45696 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0421 19:29:57.854999   45696 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0421 19:29:57.855034   45696 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0421 19:29:57.855049   45696 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0421 19:29:57.855127   45696 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0421 19:29:57.855010   45696 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0421 19:29:57.856358   45696 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:29:57.856480   45696 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0421 19:29:57.856492   45696 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0421 19:29:57.856498   45696 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0421 19:29:57.856521   45696 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0421 19:29:57.856546   45696 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0421 19:29:57.856558   45696 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0421 19:29:57.856486   45696 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0421 19:29:58.000203   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0421 19:29:58.006810   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0421 19:29:58.006985   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0421 19:29:58.007805   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0421 19:29:58.016180   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0421 19:29:58.087951   45696 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0421 19:29:58.088005   45696 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0421 19:29:58.088065   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.106297   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0421 19:29:58.119272   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0421 19:29:58.168287   45696 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0421 19:29:58.168320   45696 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0421 19:29:58.168328   45696 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0421 19:29:58.168343   45696 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0421 19:29:58.168347   45696 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0421 19:29:58.168376   45696 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0421 19:29:58.168387   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.168387   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.168406   45696 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0421 19:29:58.168427   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.168439   45696 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0421 19:29:58.168472   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.168512   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0421 19:29:58.210339   45696 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0421 19:29:58.210373   45696 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0421 19:29:58.210413   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.250000   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0421 19:29:58.250087   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0421 19:29:58.252209   45696 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0421 19:29:58.252253   45696 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0421 19:29:58.252279   45696 ssh_runner.go:195] Run: which crictl
	I0421 19:29:58.257732   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0421 19:29:58.257789   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0421 19:29:58.257821   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0421 19:29:58.257829   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0421 19:29:58.257907   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0421 19:29:58.379754   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0421 19:29:58.379845   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0421 19:29:58.382312   45696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0421 19:29:58.382375   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0421 19:29:58.382374   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0421 19:29:58.382422   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0421 19:29:58.382432   45696 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0421 19:29:58.382446   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0421 19:29:58.382460   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0421 19:29:58.382475   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0421 19:29:58.382492   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0421 19:29:58.382529   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0421 19:29:58.382555   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0421 19:29:58.382593   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0421 19:29:58.386894   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0421 19:29:58.439045   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0421 19:29:58.439147   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0421 19:29:58.439180   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0421 19:29:58.439197   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0421 19:29:58.439435   45696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0421 19:29:58.439544   45696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0421 19:29:58.829747   45696 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:30:00.669992   45696 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.2875006s)
	I0421 19:30:00.670033   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0421 19:30:00.670071   45696 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.230488741s)
	I0421 19:30:00.670096   45696 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0421 19:30:00.670101   45696 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0421 19:30:00.670128   45696 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.840352394s)
	I0421 19:30:00.670150   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0421 19:30:01.419689   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0421 19:30:01.419741   45696 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0421 19:30:01.419797   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0421 19:30:02.271306   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0421 19:30:02.271355   45696 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0421 19:30:02.271420   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0421 19:30:03.027147   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0421 19:30:03.027204   45696 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0421 19:30:03.027250   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0421 19:30:03.472537   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0421 19:30:03.472589   45696 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0421 19:30:03.472642   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0421 19:30:03.620333   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0421 19:30:03.620386   45696 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0421 19:30:03.620439   45696 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0421 19:30:05.883208   45696 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.262748263s)
	I0421 19:30:05.883258   45696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0421 19:30:05.883286   45696 cache_images.go:123] Successfully loaded all cached images
	I0421 19:30:05.883291   45696 cache_images.go:92] duration metric: took 8.028354487s to LoadCachedImages
	I0421 19:30:05.883299   45696 kubeadm.go:928] updating node { 192.168.39.171 8443 v1.24.4 crio true true} ...
	I0421 19:30:05.883398   45696 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-643468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-643468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:30:05.883457   45696 ssh_runner.go:195] Run: crio config
	I0421 19:30:05.934677   45696 cni.go:84] Creating CNI manager for ""
	I0421 19:30:05.934699   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:30:05.934713   45696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:30:05.934729   45696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-643468 NodeName:test-preload-643468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 19:30:05.934856   45696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-643468"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 19:30:05.934926   45696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0421 19:30:05.947144   45696 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:30:05.947218   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 19:30:05.958642   45696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0421 19:30:05.978025   45696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:30:05.997627   45696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0421 19:30:06.018203   45696 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0421 19:30:06.022751   45696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:30:06.037567   45696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:30:06.162637   45696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:30:06.180867   45696 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468 for IP: 192.168.39.171
	I0421 19:30:06.180893   45696 certs.go:194] generating shared ca certs ...
	I0421 19:30:06.180912   45696 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:30:06.181069   45696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 19:30:06.181141   45696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 19:30:06.181157   45696 certs.go:256] generating profile certs ...
	I0421 19:30:06.181264   45696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.key
	I0421 19:30:06.181337   45696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/apiserver.key.ba7985a1
	I0421 19:30:06.181387   45696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/proxy-client.key
	I0421 19:30:06.181520   45696 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 19:30:06.181567   45696 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 19:30:06.181580   45696 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 19:30:06.181616   45696 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 19:30:06.181653   45696 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 19:30:06.181684   45696 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 19:30:06.181736   45696 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:30:06.182561   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:30:06.225614   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:30:06.263980   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:30:06.299479   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:30:06.327613   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0421 19:30:06.361970   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:30:06.395129   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:30:06.420243   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:30:06.445054   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:30:06.470422   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 19:30:06.495971   45696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 19:30:06.521581   45696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:30:06.540058   45696 ssh_runner.go:195] Run: openssl version
	I0421 19:30:06.546493   45696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:30:06.559153   45696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:30:06.564472   45696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:30:06.564541   45696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:30:06.571288   45696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:30:06.584501   45696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 19:30:06.597645   45696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 19:30:06.602990   45696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:30:06.603043   45696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 19:30:06.609297   45696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 19:30:06.622352   45696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 19:30:06.635166   45696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 19:30:06.640354   45696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:30:06.640419   45696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 19:30:06.646853   45696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:30:06.659674   45696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:30:06.664900   45696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 19:30:06.671668   45696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 19:30:06.678351   45696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 19:30:06.684893   45696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 19:30:06.691405   45696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 19:30:06.697628   45696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
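
	[editor's note] The runs above validate each control-plane certificate with `openssl x509 -noout -checkend 86400`, which exits non-zero if the certificate expires within the next 24 hours. Below is a minimal Go sketch of an equivalent check; it is illustrative only (not minikube's actual implementation), and the certificate path is just one of the files listed above.

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the PEM-encoded certificate at path
	    // expires within the given window (the `openssl -checkend` equivalent).
	    func expiresWithin(path string, window time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM data in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("expires within 24h:", expiring)
	    }
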
	I0421 19:30:06.703795   45696 kubeadm.go:391] StartCluster: {Name:test-preload-643468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-
643468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:30:06.703878   45696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 19:30:06.703993   45696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:30:06.745418   45696 cri.go:89] found id: ""
	I0421 19:30:06.745516   45696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0421 19:30:06.757806   45696 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 19:30:06.757825   45696 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 19:30:06.757830   45696 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 19:30:06.757870   45696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 19:30:06.769471   45696 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 19:30:06.769886   45696 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-643468" does not appear in /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:30:06.769985   45696 kubeconfig.go:62] /home/jenkins/minikube-integration/18702-3854/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-643468" cluster setting kubeconfig missing "test-preload-643468" context setting]
	I0421 19:30:06.770385   45696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:30:06.770916   45696 kapi.go:59] client config for test-preload-643468: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
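
	[editor's note] The kapi client config dumped above is a client-go rest.Config built from the profile's client certificate/key and the cluster CA. The sketch below constructs an equivalent clientset from those same paths; it is a simplified illustration under that assumption, not the code minikube runs.

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        cfg := &rest.Config{
	            Host: "https://192.168.39.171:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.crt",
	                KeyFile:  "/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.key",
	                CAFile:   "/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt",
	            },
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Simple sanity call against the restarted control plane.
	        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("nodes:", len(nodes.Items))
	    }
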
	I0421 19:30:06.771496   45696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 19:30:06.782518   45696 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.171
	I0421 19:30:06.782542   45696 kubeadm.go:1154] stopping kube-system containers ...
	I0421 19:30:06.782551   45696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0421 19:30:06.782605   45696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:30:06.833607   45696 cri.go:89] found id: ""
	I0421 19:30:06.833708   45696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 19:30:06.853472   45696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:30:06.865120   45696 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:30:06.865136   45696 kubeadm.go:156] found existing configuration files:
	
	I0421 19:30:06.865180   45696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:30:06.875942   45696 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:30:06.875988   45696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:30:06.886868   45696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:30:06.897446   45696 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:30:06.897491   45696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:30:06.908651   45696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:30:06.919558   45696 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:30:06.919590   45696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:30:06.931172   45696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:30:06.942282   45696 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:30:06.942350   45696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:30:06.953615   45696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:30:06.964946   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:30:07.060609   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:30:07.617108   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:30:07.929289   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:30:08.008066   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:30:08.103861   45696 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:30:08.103965   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:30:08.604772   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:30:09.104242   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:30:09.604444   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:30:09.628610   45696 api_server.go:72] duration metric: took 1.524748383s to wait for apiserver process to appear ...
	I0421 19:30:09.628648   45696 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:30:09.628667   45696 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0421 19:30:13.223726   45696 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 19:30:13.223758   45696 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 19:30:13.223771   45696 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0421 19:30:13.286846   45696 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 19:30:13.286878   45696 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 19:30:13.629418   45696 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0421 19:30:13.635423   45696 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 19:30:13.635453   45696 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 19:30:14.129004   45696 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0421 19:30:14.137586   45696 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 19:30:14.137611   45696 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 19:30:14.629172   45696 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0421 19:30:14.634802   45696 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0421 19:30:14.641089   45696 api_server.go:141] control plane version: v1.24.4
	I0421 19:30:14.641107   45696 api_server.go:131] duration metric: took 5.012453707s to wait for apiserver health ...
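
	[editor's note] The healthz polling above progresses from 403 (anonymous request rejected) through 500 (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending) to 200. The Go sketch below is an illustrative stand-in for such a probe; skipping TLS verification is an assumption made here for brevity and is not how minikube's api_server.go check is necessarily configured.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // The apiserver cert is not in the host trust store; skip verification for this sketch.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for {
	            resp, err := client.Get("https://192.168.39.171:8443/healthz")
	            if err != nil {
	                fmt.Println("healthz not reachable yet:", err)
	            } else {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	                if resp.StatusCode == http.StatusOK {
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }
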
	I0421 19:30:14.641115   45696 cni.go:84] Creating CNI manager for ""
	I0421 19:30:14.641121   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:30:14.643052   45696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:30:14.644350   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:30:14.657892   45696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
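
	[editor's note] The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed into the log. The sketch below writes a generic bridge-plus-portmap conflist of the kind this step installs; the bridge name, subnet, and other field values are illustrative assumptions, not the file minikube actually generated.

	    package main

	    import "os"

	    // A generic bridge CNI conflist; values here are assumptions for illustration.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    `

	    func main() {
	        // Requires root on the guest; shown only to make the config shape concrete.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            panic(err)
	        }
	    }
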
	I0421 19:30:14.679055   45696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:30:14.695326   45696 system_pods.go:59] 7 kube-system pods found
	I0421 19:30:14.695363   45696 system_pods.go:61] "coredns-6d4b75cb6d-x5q6z" [20039141-6e65-4d45-9921-76b6900b3068] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0421 19:30:14.695373   45696 system_pods.go:61] "etcd-test-preload-643468" [04421987-72de-494e-8c8e-bf9c3555e311] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 19:30:14.695383   45696 system_pods.go:61] "kube-apiserver-test-preload-643468" [9817455a-2af1-4479-82d7-29a0f964950d] Running
	I0421 19:30:14.695389   45696 system_pods.go:61] "kube-controller-manager-test-preload-643468" [9dccc2f1-052a-4f1a-8d0f-5c30b81a28a0] Running
	I0421 19:30:14.695394   45696 system_pods.go:61] "kube-proxy-qtrrk" [be6ec3d8-e7b8-44a1-9020-c02b5e49b338] Running
	I0421 19:30:14.695400   45696 system_pods.go:61] "kube-scheduler-test-preload-643468" [68273115-9fa3-4e7f-934d-8d34e9864be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 19:30:14.695411   45696 system_pods.go:61] "storage-provisioner" [a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0421 19:30:14.695420   45696 system_pods.go:74] duration metric: took 16.347187ms to wait for pod list to return data ...
	I0421 19:30:14.695433   45696 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:30:14.700071   45696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:30:14.700102   45696 node_conditions.go:123] node cpu capacity is 2
	I0421 19:30:14.700115   45696 node_conditions.go:105] duration metric: took 4.676296ms to run NodePressure ...
	I0421 19:30:14.700140   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:30:14.935876   45696 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 19:30:14.940027   45696 kubeadm.go:733] kubelet initialised
	I0421 19:30:14.940045   45696 kubeadm.go:734] duration metric: took 4.146978ms waiting for restarted kubelet to initialise ...
	I0421 19:30:14.940052   45696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:30:14.944544   45696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:14.949373   45696 pod_ready.go:97] node "test-preload-643468" hosting pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:14.949400   45696 pod_ready.go:81] duration metric: took 4.832839ms for pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace to be "Ready" ...
	E0421 19:30:14.949409   45696 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-643468" hosting pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:14.949418   45696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:14.953989   45696 pod_ready.go:97] node "test-preload-643468" hosting pod "etcd-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:14.954012   45696 pod_ready.go:81] duration metric: took 4.587307ms for pod "etcd-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	E0421 19:30:14.954023   45696 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-643468" hosting pod "etcd-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:14.954031   45696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:14.959542   45696 pod_ready.go:97] node "test-preload-643468" hosting pod "kube-apiserver-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:14.959569   45696 pod_ready.go:81] duration metric: took 5.527135ms for pod "kube-apiserver-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	E0421 19:30:14.959580   45696 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-643468" hosting pod "kube-apiserver-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:14.959588   45696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:15.085794   45696 pod_ready.go:97] node "test-preload-643468" hosting pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:15.085824   45696 pod_ready.go:81] duration metric: took 126.215461ms for pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	E0421 19:30:15.085836   45696 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-643468" hosting pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:15.085842   45696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qtrrk" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:15.483907   45696 pod_ready.go:97] node "test-preload-643468" hosting pod "kube-proxy-qtrrk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:15.483940   45696 pod_ready.go:81] duration metric: took 398.087072ms for pod "kube-proxy-qtrrk" in "kube-system" namespace to be "Ready" ...
	E0421 19:30:15.483953   45696 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-643468" hosting pod "kube-proxy-qtrrk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:15.483962   45696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:15.882966   45696 pod_ready.go:97] node "test-preload-643468" hosting pod "kube-scheduler-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:15.882997   45696 pod_ready.go:81] duration metric: took 399.026606ms for pod "kube-scheduler-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	E0421 19:30:15.883006   45696 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-643468" hosting pod "kube-scheduler-test-preload-643468" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:15.883013   45696 pod_ready.go:38] duration metric: took 942.954116ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:30:15.883029   45696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:30:15.896829   45696 ops.go:34] apiserver oom_adj: -16
	I0421 19:30:15.896852   45696 kubeadm.go:591] duration metric: took 9.139016469s to restartPrimaryControlPlane
	I0421 19:30:15.896863   45696 kubeadm.go:393] duration metric: took 9.193070741s to StartCluster
	I0421 19:30:15.896877   45696 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:30:15.896950   45696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:30:15.897587   45696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:30:15.897853   45696 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:30:15.900490   45696 out.go:177] * Verifying Kubernetes components...
	I0421 19:30:15.897904   45696 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:30:15.898093   45696 config.go:182] Loaded profile config "test-preload-643468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0421 19:30:15.901854   45696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:30:15.901869   45696 addons.go:69] Setting default-storageclass=true in profile "test-preload-643468"
	I0421 19:30:15.901891   45696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-643468"
	I0421 19:30:15.901859   45696 addons.go:69] Setting storage-provisioner=true in profile "test-preload-643468"
	I0421 19:30:15.901946   45696 addons.go:234] Setting addon storage-provisioner=true in "test-preload-643468"
	W0421 19:30:15.901958   45696 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:30:15.901991   45696 host.go:66] Checking if "test-preload-643468" exists ...
	I0421 19:30:15.902297   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:30:15.902337   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:30:15.902448   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:30:15.902506   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:30:15.917143   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I0421 19:30:15.917209   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0421 19:30:15.917619   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:30:15.917851   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:30:15.918209   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:30:15.918232   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:30:15.918405   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:30:15.918433   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:30:15.918518   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:30:15.918682   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetState
	I0421 19:30:15.918757   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:30:15.919328   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:30:15.919382   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:30:15.920936   45696 kapi.go:59] client config for test-preload-643468: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/test-preload-643468/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 19:30:15.921177   45696 addons.go:234] Setting addon default-storageclass=true in "test-preload-643468"
	W0421 19:30:15.921197   45696 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:30:15.921223   45696 host.go:66] Checking if "test-preload-643468" exists ...
	I0421 19:30:15.921576   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:30:15.921625   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:30:15.933506   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0421 19:30:15.933875   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:30:15.934356   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:30:15.934387   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:30:15.934689   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:30:15.934879   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetState
	I0421 19:30:15.935642   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0421 19:30:15.936092   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:30:15.936599   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:30:15.936626   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:30:15.936638   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:30:15.938783   45696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:30:15.936978   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:30:15.940276   45696 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:30:15.940297   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:30:15.940314   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:30:15.940736   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:30:15.940780   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:30:15.943052   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:30:15.943558   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:30:15.943594   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:30:15.943726   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:30:15.943927   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:30:15.944110   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:30:15.944253   45696 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa Username:docker}
	I0421 19:30:15.955137   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I0421 19:30:15.955515   45696 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:30:15.955955   45696 main.go:141] libmachine: Using API Version  1
	I0421 19:30:15.955982   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:30:15.956284   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:30:15.956486   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetState
	I0421 19:30:15.957992   45696 main.go:141] libmachine: (test-preload-643468) Calling .DriverName
	I0421 19:30:15.958259   45696 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:30:15.958279   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:30:15.958298   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHHostname
	I0421 19:30:15.961019   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:30:15.961415   45696 main.go:141] libmachine: (test-preload-643468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:73:94", ip: ""} in network mk-test-preload-643468: {Iface:virbr1 ExpiryTime:2024-04-21 20:25:46 +0000 UTC Type:0 Mac:52:54:00:02:73:94 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-643468 Clientid:01:52:54:00:02:73:94}
	I0421 19:30:15.961443   45696 main.go:141] libmachine: (test-preload-643468) DBG | domain test-preload-643468 has defined IP address 192.168.39.171 and MAC address 52:54:00:02:73:94 in network mk-test-preload-643468
	I0421 19:30:15.961561   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHPort
	I0421 19:30:15.961740   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHKeyPath
	I0421 19:30:15.961922   45696 main.go:141] libmachine: (test-preload-643468) Calling .GetSSHUsername
	I0421 19:30:15.962088   45696 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/test-preload-643468/id_rsa Username:docker}
	I0421 19:30:16.076909   45696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:30:16.096965   45696 node_ready.go:35] waiting up to 6m0s for node "test-preload-643468" to be "Ready" ...
	I0421 19:30:16.171075   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:30:16.211403   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:30:17.110954   45696 main.go:141] libmachine: Making call to close driver server
	I0421 19:30:17.110981   45696 main.go:141] libmachine: (test-preload-643468) Calling .Close
	I0421 19:30:17.110997   45696 main.go:141] libmachine: Making call to close driver server
	I0421 19:30:17.111015   45696 main.go:141] libmachine: (test-preload-643468) Calling .Close
	I0421 19:30:17.111264   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:30:17.111283   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:30:17.111291   45696 main.go:141] libmachine: Making call to close driver server
	I0421 19:30:17.111298   45696 main.go:141] libmachine: (test-preload-643468) Calling .Close
	I0421 19:30:17.111314   45696 main.go:141] libmachine: (test-preload-643468) DBG | Closing plugin on server side
	I0421 19:30:17.111264   45696 main.go:141] libmachine: (test-preload-643468) DBG | Closing plugin on server side
	I0421 19:30:17.111302   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:30:17.111431   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:30:17.111450   45696 main.go:141] libmachine: Making call to close driver server
	I0421 19:30:17.111460   45696 main.go:141] libmachine: (test-preload-643468) Calling .Close
	I0421 19:30:17.111572   45696 main.go:141] libmachine: (test-preload-643468) DBG | Closing plugin on server side
	I0421 19:30:17.111580   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:30:17.111623   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:30:17.111645   45696 main.go:141] libmachine: (test-preload-643468) DBG | Closing plugin on server side
	I0421 19:30:17.111681   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:30:17.111690   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:30:17.116630   45696 main.go:141] libmachine: Making call to close driver server
	I0421 19:30:17.116647   45696 main.go:141] libmachine: (test-preload-643468) Calling .Close
	I0421 19:30:17.116874   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:30:17.116886   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:30:17.119476   45696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 19:30:17.120775   45696 addons.go:505] duration metric: took 1.222883421s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0421 19:30:18.103030   45696 node_ready.go:53] node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:20.600976   45696 node_ready.go:53] node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:22.601802   45696 node_ready.go:53] node "test-preload-643468" has status "Ready":"False"
	I0421 19:30:23.600176   45696 node_ready.go:49] node "test-preload-643468" has status "Ready":"True"
	I0421 19:30:23.600199   45696 node_ready.go:38] duration metric: took 7.503192096s for node "test-preload-643468" to be "Ready" ...
	I0421 19:30:23.600206   45696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:30:23.605008   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:23.610039   45696 pod_ready.go:92] pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace has status "Ready":"True"
	I0421 19:30:23.610070   45696 pod_ready.go:81] duration metric: took 5.03975ms for pod "coredns-6d4b75cb6d-x5q6z" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:23.610081   45696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.617157   45696 pod_ready.go:92] pod "etcd-test-preload-643468" in "kube-system" namespace has status "Ready":"True"
	I0421 19:30:24.617179   45696 pod_ready.go:81] duration metric: took 1.007090615s for pod "etcd-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.617188   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.621515   45696 pod_ready.go:92] pod "kube-apiserver-test-preload-643468" in "kube-system" namespace has status "Ready":"True"
	I0421 19:30:24.621532   45696 pod_ready.go:81] duration metric: took 4.337766ms for pod "kube-apiserver-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.621540   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.626102   45696 pod_ready.go:92] pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace has status "Ready":"True"
	I0421 19:30:24.626116   45696 pod_ready.go:81] duration metric: took 4.571436ms for pod "kube-controller-manager-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.626124   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qtrrk" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.801031   45696 pod_ready.go:92] pod "kube-proxy-qtrrk" in "kube-system" namespace has status "Ready":"True"
	I0421 19:30:24.801054   45696 pod_ready.go:81] duration metric: took 174.924406ms for pod "kube-proxy-qtrrk" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:24.801063   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:26.808579   45696 pod_ready.go:102] pod "kube-scheduler-test-preload-643468" in "kube-system" namespace has status "Ready":"False"
	I0421 19:30:27.308919   45696 pod_ready.go:92] pod "kube-scheduler-test-preload-643468" in "kube-system" namespace has status "Ready":"True"
	I0421 19:30:27.308939   45696 pod_ready.go:81] duration metric: took 2.50786859s for pod "kube-scheduler-test-preload-643468" in "kube-system" namespace to be "Ready" ...
	I0421 19:30:27.308952   45696 pod_ready.go:38] duration metric: took 3.708736345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
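
	[editor's note] The pod_ready loop above polls each system-critical pod until its Ready condition is True. A minimal client-go sketch of the same kind of wait is shown below; the kubeconfig path and pod name are taken from this log for illustration, and the polling helper is an assumption rather than minikube's own pod_ready.go logic.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's Ready condition is True.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18702-3854/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-scheduler-test-preload-643468", metav1.GetOptions{})
	            if err != nil {
	                return false, nil // keep polling through transient errors
	            }
	            return isPodReady(pod), nil
	        })
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("pod is Ready")
	    }
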
	I0421 19:30:27.308977   45696 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:30:27.309028   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:30:27.327012   45696 api_server.go:72] duration metric: took 11.429121111s to wait for apiserver process to appear ...
	I0421 19:30:27.327034   45696 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:30:27.327054   45696 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0421 19:30:27.335784   45696 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0421 19:30:27.336825   45696 api_server.go:141] control plane version: v1.24.4
	I0421 19:30:27.336853   45696 api_server.go:131] duration metric: took 9.812305ms to wait for apiserver health ...
	I0421 19:30:27.336862   45696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:30:27.404778   45696 system_pods.go:59] 7 kube-system pods found
	I0421 19:30:27.404800   45696 system_pods.go:61] "coredns-6d4b75cb6d-x5q6z" [20039141-6e65-4d45-9921-76b6900b3068] Running
	I0421 19:30:27.404804   45696 system_pods.go:61] "etcd-test-preload-643468" [04421987-72de-494e-8c8e-bf9c3555e311] Running
	I0421 19:30:27.404807   45696 system_pods.go:61] "kube-apiserver-test-preload-643468" [9817455a-2af1-4479-82d7-29a0f964950d] Running
	I0421 19:30:27.404810   45696 system_pods.go:61] "kube-controller-manager-test-preload-643468" [9dccc2f1-052a-4f1a-8d0f-5c30b81a28a0] Running
	I0421 19:30:27.404813   45696 system_pods.go:61] "kube-proxy-qtrrk" [be6ec3d8-e7b8-44a1-9020-c02b5e49b338] Running
	I0421 19:30:27.404816   45696 system_pods.go:61] "kube-scheduler-test-preload-643468" [68273115-9fa3-4e7f-934d-8d34e9864be2] Running
	I0421 19:30:27.404819   45696 system_pods.go:61] "storage-provisioner" [a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58] Running
	I0421 19:30:27.404823   45696 system_pods.go:74] duration metric: took 67.955743ms to wait for pod list to return data ...
	I0421 19:30:27.404830   45696 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:30:27.601108   45696 default_sa.go:45] found service account: "default"
	I0421 19:30:27.601143   45696 default_sa.go:55] duration metric: took 196.306742ms for default service account to be created ...
	I0421 19:30:27.601155   45696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:30:27.803015   45696 system_pods.go:86] 7 kube-system pods found
	I0421 19:30:27.803053   45696 system_pods.go:89] "coredns-6d4b75cb6d-x5q6z" [20039141-6e65-4d45-9921-76b6900b3068] Running
	I0421 19:30:27.803060   45696 system_pods.go:89] "etcd-test-preload-643468" [04421987-72de-494e-8c8e-bf9c3555e311] Running
	I0421 19:30:27.803067   45696 system_pods.go:89] "kube-apiserver-test-preload-643468" [9817455a-2af1-4479-82d7-29a0f964950d] Running
	I0421 19:30:27.803073   45696 system_pods.go:89] "kube-controller-manager-test-preload-643468" [9dccc2f1-052a-4f1a-8d0f-5c30b81a28a0] Running
	I0421 19:30:27.803079   45696 system_pods.go:89] "kube-proxy-qtrrk" [be6ec3d8-e7b8-44a1-9020-c02b5e49b338] Running
	I0421 19:30:27.803085   45696 system_pods.go:89] "kube-scheduler-test-preload-643468" [68273115-9fa3-4e7f-934d-8d34e9864be2] Running
	I0421 19:30:27.803090   45696 system_pods.go:89] "storage-provisioner" [a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58] Running
	I0421 19:30:27.803099   45696 system_pods.go:126] duration metric: took 201.937477ms to wait for k8s-apps to be running ...
	I0421 19:30:27.803109   45696 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:30:27.803173   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:30:27.818969   45696 system_svc.go:56] duration metric: took 15.853383ms WaitForService to wait for kubelet
	I0421 19:30:27.819001   45696 kubeadm.go:576] duration metric: took 11.921110598s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:30:27.819023   45696 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:30:28.000892   45696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:30:28.000920   45696 node_conditions.go:123] node cpu capacity is 2
	I0421 19:30:28.000930   45696 node_conditions.go:105] duration metric: took 181.90215ms to run NodePressure ...
	I0421 19:30:28.000941   45696 start.go:240] waiting for startup goroutines ...
	I0421 19:30:28.000948   45696 start.go:245] waiting for cluster config update ...
	I0421 19:30:28.000957   45696 start.go:254] writing updated cluster config ...
	I0421 19:30:28.001196   45696 ssh_runner.go:195] Run: rm -f paused
	I0421 19:30:28.049069   45696 start.go:600] kubectl: 1.30.0, cluster: 1.24.4 (minor skew: 6)
	I0421 19:30:28.051277   45696 out.go:177] 
	W0421 19:30:28.052858   45696 out.go:239] ! /usr/local/bin/kubectl is version 1.30.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0421 19:30:28.054197   45696 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0421 19:30:28.055422   45696 out.go:177] * Done! kubectl is now configured to use "test-preload-643468" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.011737149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727829011715439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f962e6a9-5a79-4d15-9860-e59391bb787d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.012362332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2103de9-bafe-4659-b021-b135d5657c12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.012425063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2103de9-bafe-4659-b021-b135d5657c12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.012575825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1f55c3d25dc458ab5de6ec611ee6db996897f18bf1422df41c027b4bdeae0f6,PodSandboxId:8ab18a828c7f73563c70dd78eb66190e1708b6fb2935ff600e72334387636d6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713727822347505421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5q6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20039141-6e65-4d45-9921-76b6900b3068,},Annotations:map[string]string{io.kubernetes.container.hash: 56913022,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d98d326c0657a32fa4d3428a43d1fb2fa14e52c0a923b7b11608854d14cafb4,PodSandboxId:dbb1e8023c33e2bd32bf1135a96bbe5c2fd1b01b59f9e41d0058bdf188764955,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727815146738923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0fdb4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9838510f43594b237818ab56a6d348e835939e193789727bf81fb2286d8c81,PodSandboxId:d492a61f217925e12e7ddc466b5ec0344259052963a961f75e75a7d020869da3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713727815111335537,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtrrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6ec3d8-e7b8-44a1-9020-c02b5e49b338,},Annotations:map[string]string{io.kubernetes.container.hash: 8253617d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ff471e5cc044c04fc1190ef9b0b3576d557fcfd8d923117eb6d280655c875a,PodSandboxId:64287e03a7ac018a6e6c8f306fcb384fe5d62b887138b5d331090d10a5e91dba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713727808926257784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953d03de2
7dd690481e244a851920d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a319820fe8e02482e544b69d2f20dbfa471d86973bc3cc842b8604e76445420d,PodSandboxId:06c7a18522173286baba65888c99f5d4aec7d32b07b25f89c2ceceacff522d95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713727808948626879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd56185c4160737a53a7
4dc8b21d134a,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cf4a374a944ec0dff98c21b1905d0cf7bb870e2e1b693dbb28d658d60afe4,PodSandboxId:01bb92a07de30a46a106f51869b82ccfcd4dac5aba67f5355ae0c529b6ed5d2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713727808824514399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99b
c0cb66bbc1b60a564d557b5ace1f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc96bce0462535d48b7252b98578e5f72664a42f1eb12f3ebe8d399135bac3a,PodSandboxId:17e05fae2b6861a8d08994ae020179677e6810638272d5e1f2479ce571c94f9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713727808827456113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8441cf3ba7fac6b1ce7c1bdf1ba37f9a,},Annotation
s:map[string]string{io.kubernetes.container.hash: cdeddcb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2103de9-bafe-4659-b021-b135d5657c12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.056131510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9d3a9cf-83ad-431c-9871-3477ffe95170 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.056200217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9d3a9cf-83ad-431c-9871-3477ffe95170 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.057310665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86443227-6523-4f4f-8c5d-f506a4f90160 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.057725938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727829057706792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86443227-6523-4f4f-8c5d-f506a4f90160 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.058618424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8bc99a1-d475-464b-97a8-9b41b10c26a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.058669376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8bc99a1-d475-464b-97a8-9b41b10c26a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.058879709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1f55c3d25dc458ab5de6ec611ee6db996897f18bf1422df41c027b4bdeae0f6,PodSandboxId:8ab18a828c7f73563c70dd78eb66190e1708b6fb2935ff600e72334387636d6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713727822347505421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5q6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20039141-6e65-4d45-9921-76b6900b3068,},Annotations:map[string]string{io.kubernetes.container.hash: 56913022,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d98d326c0657a32fa4d3428a43d1fb2fa14e52c0a923b7b11608854d14cafb4,PodSandboxId:dbb1e8023c33e2bd32bf1135a96bbe5c2fd1b01b59f9e41d0058bdf188764955,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727815146738923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0fdb4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9838510f43594b237818ab56a6d348e835939e193789727bf81fb2286d8c81,PodSandboxId:d492a61f217925e12e7ddc466b5ec0344259052963a961f75e75a7d020869da3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713727815111335537,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtrrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6ec3d8-e7b8-44a1-9020-c02b5e49b338,},Annotations:map[string]string{io.kubernetes.container.hash: 8253617d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ff471e5cc044c04fc1190ef9b0b3576d557fcfd8d923117eb6d280655c875a,PodSandboxId:64287e03a7ac018a6e6c8f306fcb384fe5d62b887138b5d331090d10a5e91dba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713727808926257784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953d03de2
7dd690481e244a851920d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a319820fe8e02482e544b69d2f20dbfa471d86973bc3cc842b8604e76445420d,PodSandboxId:06c7a18522173286baba65888c99f5d4aec7d32b07b25f89c2ceceacff522d95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713727808948626879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd56185c4160737a53a7
4dc8b21d134a,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cf4a374a944ec0dff98c21b1905d0cf7bb870e2e1b693dbb28d658d60afe4,PodSandboxId:01bb92a07de30a46a106f51869b82ccfcd4dac5aba67f5355ae0c529b6ed5d2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713727808824514399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99b
c0cb66bbc1b60a564d557b5ace1f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc96bce0462535d48b7252b98578e5f72664a42f1eb12f3ebe8d399135bac3a,PodSandboxId:17e05fae2b6861a8d08994ae020179677e6810638272d5e1f2479ce571c94f9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713727808827456113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8441cf3ba7fac6b1ce7c1bdf1ba37f9a,},Annotation
s:map[string]string{io.kubernetes.container.hash: cdeddcb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8bc99a1-d475-464b-97a8-9b41b10c26a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.102514651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56b220b2-4562-4611-9395-57a686abf29d name=/runtime.v1.RuntimeService/Version
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.102584181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56b220b2-4562-4611-9395-57a686abf29d name=/runtime.v1.RuntimeService/Version
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.104092834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2821c3d5-5a07-4601-9bd8-bfa7ef85fe27 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.104505206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727829104486095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2821c3d5-5a07-4601-9bd8-bfa7ef85fe27 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.105258144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e51f8931-7314-4584-9afd-63cadec194e0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.105311165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e51f8931-7314-4584-9afd-63cadec194e0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.105484190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1f55c3d25dc458ab5de6ec611ee6db996897f18bf1422df41c027b4bdeae0f6,PodSandboxId:8ab18a828c7f73563c70dd78eb66190e1708b6fb2935ff600e72334387636d6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713727822347505421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5q6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20039141-6e65-4d45-9921-76b6900b3068,},Annotations:map[string]string{io.kubernetes.container.hash: 56913022,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d98d326c0657a32fa4d3428a43d1fb2fa14e52c0a923b7b11608854d14cafb4,PodSandboxId:dbb1e8023c33e2bd32bf1135a96bbe5c2fd1b01b59f9e41d0058bdf188764955,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727815146738923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0fdb4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9838510f43594b237818ab56a6d348e835939e193789727bf81fb2286d8c81,PodSandboxId:d492a61f217925e12e7ddc466b5ec0344259052963a961f75e75a7d020869da3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713727815111335537,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtrrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6ec3d8-e7b8-44a1-9020-c02b5e49b338,},Annotations:map[string]string{io.kubernetes.container.hash: 8253617d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ff471e5cc044c04fc1190ef9b0b3576d557fcfd8d923117eb6d280655c875a,PodSandboxId:64287e03a7ac018a6e6c8f306fcb384fe5d62b887138b5d331090d10a5e91dba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713727808926257784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953d03de2
7dd690481e244a851920d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a319820fe8e02482e544b69d2f20dbfa471d86973bc3cc842b8604e76445420d,PodSandboxId:06c7a18522173286baba65888c99f5d4aec7d32b07b25f89c2ceceacff522d95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713727808948626879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd56185c4160737a53a7
4dc8b21d134a,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cf4a374a944ec0dff98c21b1905d0cf7bb870e2e1b693dbb28d658d60afe4,PodSandboxId:01bb92a07de30a46a106f51869b82ccfcd4dac5aba67f5355ae0c529b6ed5d2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713727808824514399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99b
c0cb66bbc1b60a564d557b5ace1f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc96bce0462535d48b7252b98578e5f72664a42f1eb12f3ebe8d399135bac3a,PodSandboxId:17e05fae2b6861a8d08994ae020179677e6810638272d5e1f2479ce571c94f9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713727808827456113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8441cf3ba7fac6b1ce7c1bdf1ba37f9a,},Annotation
s:map[string]string{io.kubernetes.container.hash: cdeddcb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e51f8931-7314-4584-9afd-63cadec194e0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.140769367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d302e717-f33a-4230-8e2c-c6aef5874719 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.141161621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d302e717-f33a-4230-8e2c-c6aef5874719 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.142875392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc78ea0b-4290-4fcd-a078-dc9dde0b0f96 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.143326434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713727829143305707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc78ea0b-4290-4fcd-a078-dc9dde0b0f96 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.144269026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55585ccc-e0ed-479c-9e7e-a2dc853cea44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.144320696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55585ccc-e0ed-479c-9e7e-a2dc853cea44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:30:29 test-preload-643468 crio[677]: time="2024-04-21 19:30:29.144482130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1f55c3d25dc458ab5de6ec611ee6db996897f18bf1422df41c027b4bdeae0f6,PodSandboxId:8ab18a828c7f73563c70dd78eb66190e1708b6fb2935ff600e72334387636d6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713727822347505421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5q6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20039141-6e65-4d45-9921-76b6900b3068,},Annotations:map[string]string{io.kubernetes.container.hash: 56913022,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d98d326c0657a32fa4d3428a43d1fb2fa14e52c0a923b7b11608854d14cafb4,PodSandboxId:dbb1e8023c33e2bd32bf1135a96bbe5c2fd1b01b59f9e41d0058bdf188764955,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713727815146738923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0fdb4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9838510f43594b237818ab56a6d348e835939e193789727bf81fb2286d8c81,PodSandboxId:d492a61f217925e12e7ddc466b5ec0344259052963a961f75e75a7d020869da3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713727815111335537,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtrrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6ec3d8-e7b8-44a1-9020-c02b5e49b338,},Annotations:map[string]string{io.kubernetes.container.hash: 8253617d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ff471e5cc044c04fc1190ef9b0b3576d557fcfd8d923117eb6d280655c875a,PodSandboxId:64287e03a7ac018a6e6c8f306fcb384fe5d62b887138b5d331090d10a5e91dba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713727808926257784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953d03de2
7dd690481e244a851920d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a319820fe8e02482e544b69d2f20dbfa471d86973bc3cc842b8604e76445420d,PodSandboxId:06c7a18522173286baba65888c99f5d4aec7d32b07b25f89c2ceceacff522d95,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713727808948626879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd56185c4160737a53a7
4dc8b21d134a,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8cf4a374a944ec0dff98c21b1905d0cf7bb870e2e1b693dbb28d658d60afe4,PodSandboxId:01bb92a07de30a46a106f51869b82ccfcd4dac5aba67f5355ae0c529b6ed5d2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713727808824514399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99b
c0cb66bbc1b60a564d557b5ace1f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc96bce0462535d48b7252b98578e5f72664a42f1eb12f3ebe8d399135bac3a,PodSandboxId:17e05fae2b6861a8d08994ae020179677e6810638272d5e1f2479ce571c94f9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713727808827456113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-643468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8441cf3ba7fac6b1ce7c1bdf1ba37f9a,},Annotation
s:map[string]string{io.kubernetes.container.hash: cdeddcb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55585ccc-e0ed-479c-9e7e-a2dc853cea44 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f1f55c3d25dc4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   8ab18a828c7f7       coredns-6d4b75cb6d-x5q6z
	7d98d326c0657       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   dbb1e8023c33e       storage-provisioner
	2e9838510f435       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   d492a61f21792       kube-proxy-qtrrk
	a319820fe8e02       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   06c7a18522173       kube-apiserver-test-preload-643468
	91ff471e5cc04       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   64287e03a7ac0       kube-scheduler-test-preload-643468
	8dc96bce04625       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   17e05fae2b686       etcd-test-preload-643468
	5c8cf4a374a94       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   01bb92a07de30       kube-controller-manager-test-preload-643468
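	The table above is CRI-O's view of the restarted control-plane and workload containers, all at attempt 1 after the preload restart. A hedged way to reproduce it by hand, assuming the same profile is still running, is to run crictl inside the minikube VM:
	
	  # list all containers known to CRI-O on the node (roughly the table above)
	  out/minikube-linux-amd64 -p test-preload-643468 ssh -- sudo crictl ps -a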
	
	
	==> coredns [f1f55c3d25dc458ab5de6ec611ee6db996897f18bf1422df41c027b4bdeae0f6] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50282 - 57659 "HINFO IN 419222556693047332.6987960253318789688. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017669786s
	
	
	==> describe nodes <==
	Name:               test-preload-643468
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-643468
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=test-preload-643468
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:27:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-643468
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:30:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:30:23 +0000   Sun, 21 Apr 2024 19:27:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:30:23 +0000   Sun, 21 Apr 2024 19:27:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:30:23 +0000   Sun, 21 Apr 2024 19:27:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:30:23 +0000   Sun, 21 Apr 2024 19:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    test-preload-643468
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 507c5dd3af514419b2397e097a5894cf
	  System UUID:                507c5dd3-af51-4419-b239-7e097a5894cf
	  Boot ID:                    0085bbbb-8d3d-4334-aba0-e5f4f2042d31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-x5q6z                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m14s
	  kube-system                 etcd-test-preload-643468                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-test-preload-643468             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-test-preload-643468    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-qtrrk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-test-preload-643468             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 2m12s              kube-proxy       
	  Normal  Starting                 2m27s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s              kubelet          Node test-preload-643468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s              kubelet          Node test-preload-643468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s              kubelet          Node test-preload-643468 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m17s              kubelet          Node test-preload-643468 status is now: NodeReady
	  Normal  RegisteredNode           2m15s              node-controller  Node test-preload-643468 event: Registered Node test-preload-643468 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-643468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-643468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-643468 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-643468 event: Registered Node test-preload-643468 in Controller
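	The node summary and event list above are the output of kubectl's node describe; a hedged sketch for regenerating it against this profile, again using the version-matched bundled kubectl, would be:
	
	  # re-dump the node description shown above
	  out/minikube-linux-amd64 -p test-preload-643468 kubectl -- describe node test-preload-643468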
	
	
	==> dmesg <==
	[Apr21 19:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051921] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043577] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.684004] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.558049] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.716730] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.723092] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.062389] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064768] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.200220] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.124241] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.286034] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[Apr21 19:30] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.058254] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.689004] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +4.856484] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.269079] systemd-fstab-generator[1711]: Ignoring "noauto" option for root device
	[  +6.157008] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [8dc96bce0462535d48b7252b98578e5f72664a42f1eb12f3ebe8d399135bac3a] <==
	{"level":"info","ts":"2024-04-21T19:30:09.179Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4e6b9cdcc1ed933f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-21T19:30:09.179Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-21T19:30:09.183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)"}
	{"level":"info","ts":"2024-04-21T19:30:09.183Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-04-21T19:30:09.183Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:30:09.183Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:30:09.188Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T19:30:09.188Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T19:30:09.188Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T19:30:09.189Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-04-21T19:30:09.189Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-04-21T19:30:10.752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-04-21T19:30:10.753Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:test-preload-643468 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:30:10.753Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:30:10.754Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:30:10.755Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-04-21T19:30:10.755Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T19:30:10.755Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:30:10.755Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:30:29 up 0 min,  0 users,  load average: 1.42, 0.42, 0.15
	Linux test-preload-643468 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a319820fe8e02482e544b69d2f20dbfa471d86973bc3cc842b8604e76445420d] <==
	I0421 19:30:13.253228       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0421 19:30:13.253261       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0421 19:30:13.208583       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0421 19:30:13.210285       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0421 19:30:13.266253       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0421 19:30:13.252944       1 naming_controller.go:291] Starting NamingConditionController
	E0421 19:30:13.327493       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0421 19:30:13.331699       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0421 19:30:13.332066       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0421 19:30:13.366358       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0421 19:30:13.370685       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0421 19:30:13.379914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 19:30:13.394924       1 cache.go:39] Caches are synced for autoregister controller
	I0421 19:30:13.405255       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 19:30:13.412631       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 19:30:13.880683       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0421 19:30:14.271113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0421 19:30:14.826027       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0421 19:30:14.844860       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0421 19:30:14.894786       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0421 19:30:14.920411       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 19:30:14.927026       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0421 19:30:15.465646       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0421 19:30:25.987531       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0421 19:30:26.080903       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5c8cf4a374a944ec0dff98c21b1905d0cf7bb870e2e1b693dbb28d658d60afe4] <==
	I0421 19:30:25.975456       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0421 19:30:25.975897       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0421 19:30:25.975982       1 shared_informer.go:262] Caches are synced for job
	I0421 19:30:25.982965       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0421 19:30:25.985434       1 shared_informer.go:262] Caches are synced for namespace
	I0421 19:30:25.985763       1 shared_informer.go:262] Caches are synced for PVC protection
	I0421 19:30:25.990484       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0421 19:30:25.993540       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0421 19:30:25.999350       1 shared_informer.go:262] Caches are synced for daemon sets
	I0421 19:30:26.042537       1 shared_informer.go:262] Caches are synced for taint
	I0421 19:30:26.042643       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0421 19:30:26.042729       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-643468. Assuming now as a timestamp.
	I0421 19:30:26.042778       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0421 19:30:26.042881       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0421 19:30:26.043148       1 event.go:294] "Event occurred" object="test-preload-643468" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-643468 event: Registered Node test-preload-643468 in Controller"
	I0421 19:30:26.090747       1 shared_informer.go:262] Caches are synced for disruption
	I0421 19:30:26.090915       1 disruption.go:371] Sending events to api server.
	I0421 19:30:26.103441       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0421 19:30:26.114953       1 shared_informer.go:262] Caches are synced for deployment
	I0421 19:30:26.138478       1 shared_informer.go:262] Caches are synced for resource quota
	I0421 19:30:26.184487       1 shared_informer.go:262] Caches are synced for resource quota
	I0421 19:30:26.200925       1 shared_informer.go:262] Caches are synced for cronjob
	I0421 19:30:26.618536       1 shared_informer.go:262] Caches are synced for garbage collector
	I0421 19:30:26.676307       1 shared_informer.go:262] Caches are synced for garbage collector
	I0421 19:30:26.676409       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [2e9838510f43594b237818ab56a6d348e835939e193789727bf81fb2286d8c81] <==
	I0421 19:30:15.410635       1 node.go:163] Successfully retrieved node IP: 192.168.39.171
	I0421 19:30:15.410784       1 server_others.go:138] "Detected node IP" address="192.168.39.171"
	I0421 19:30:15.410900       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0421 19:30:15.449243       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0421 19:30:15.449332       1 server_others.go:206] "Using iptables Proxier"
	I0421 19:30:15.450750       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0421 19:30:15.452426       1 server.go:661] "Version info" version="v1.24.4"
	I0421 19:30:15.452493       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:30:15.456270       1 config.go:317] "Starting service config controller"
	I0421 19:30:15.456315       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0421 19:30:15.456351       1 config.go:226] "Starting endpoint slice config controller"
	I0421 19:30:15.456367       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0421 19:30:15.464595       1 config.go:444] "Starting node config controller"
	I0421 19:30:15.483905       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0421 19:30:15.556733       1 shared_informer.go:262] Caches are synced for service config
	I0421 19:30:15.558021       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0421 19:30:15.586790       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [91ff471e5cc044c04fc1190ef9b0b3576d557fcfd8d923117eb6d280655c875a] <==
	I0421 19:30:09.631495       1 serving.go:348] Generated self-signed cert in-memory
	I0421 19:30:13.362665       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0421 19:30:13.362775       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:30:13.374780       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0421 19:30:13.374915       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 19:30:13.374999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0421 19:30:13.375342       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0421 19:30:13.374898       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0421 19:30:13.376851       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0421 19:30:13.377044       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0421 19:30:13.377082       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 19:30:13.476120       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0421 19:30:13.477617       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 19:30:13.477758       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 21 19:30:13 test-preload-643468 kubelet[1073]: I0421 19:30:13.405428    1073 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-643468"
	Apr 21 19:30:13 test-preload-643468 kubelet[1073]: I0421 19:30:13.411286    1073 setters.go:532] "Node became not ready" node="test-preload-643468" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-21 19:30:13.411236711 +0000 UTC m=+5.480571801 LastTransitionTime:2024-04-21 19:30:13.411236711 +0000 UTC m=+5.480571801 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.083459    1073 apiserver.go:52] "Watching apiserver"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.087192    1073 topology_manager.go:200] "Topology Admit Handler"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.087317    1073 topology_manager.go:200] "Topology Admit Handler"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: E0421 19:30:14.087876    1073 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-x5q6z" podUID=20039141-6e65-4d45-9921-76b6900b3068
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.088085    1073 topology_manager.go:200] "Topology Admit Handler"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.148287    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58-tmp\") pod \"storage-provisioner\" (UID: \"a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58\") " pod="kube-system/storage-provisioner"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.148878    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvk4c\" (UniqueName: \"kubernetes.io/projected/20039141-6e65-4d45-9921-76b6900b3068-kube-api-access-pvk4c\") pod \"coredns-6d4b75cb6d-x5q6z\" (UID: \"20039141-6e65-4d45-9921-76b6900b3068\") " pod="kube-system/coredns-6d4b75cb6d-x5q6z"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149073    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be6ec3d8-e7b8-44a1-9020-c02b5e49b338-kube-proxy\") pod \"kube-proxy-qtrrk\" (UID: \"be6ec3d8-e7b8-44a1-9020-c02b5e49b338\") " pod="kube-system/kube-proxy-qtrrk"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149137    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc6ct\" (UniqueName: \"kubernetes.io/projected/be6ec3d8-e7b8-44a1-9020-c02b5e49b338-kube-api-access-xc6ct\") pod \"kube-proxy-qtrrk\" (UID: \"be6ec3d8-e7b8-44a1-9020-c02b5e49b338\") " pod="kube-system/kube-proxy-qtrrk"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149193    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume\") pod \"coredns-6d4b75cb6d-x5q6z\" (UID: \"20039141-6e65-4d45-9921-76b6900b3068\") " pod="kube-system/coredns-6d4b75cb6d-x5q6z"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149416    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6ec3d8-e7b8-44a1-9020-c02b5e49b338-lib-modules\") pod \"kube-proxy-qtrrk\" (UID: \"be6ec3d8-e7b8-44a1-9020-c02b5e49b338\") " pod="kube-system/kube-proxy-qtrrk"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149566    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pjl\" (UniqueName: \"kubernetes.io/projected/a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58-kube-api-access-76pjl\") pod \"storage-provisioner\" (UID: \"a5601f9c-4a49-4fbe-a4e0-ea5f687e5e58\") " pod="kube-system/storage-provisioner"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149633    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6ec3d8-e7b8-44a1-9020-c02b5e49b338-xtables-lock\") pod \"kube-proxy-qtrrk\" (UID: \"be6ec3d8-e7b8-44a1-9020-c02b5e49b338\") " pod="kube-system/kube-proxy-qtrrk"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: I0421 19:30:14.149672    1073 reconciler.go:159] "Reconciler: start to sync state"
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: E0421 19:30:14.254324    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: E0421 19:30:14.254685    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume podName:20039141-6e65-4d45-9921-76b6900b3068 nodeName:}" failed. No retries permitted until 2024-04-21 19:30:14.754645617 +0000 UTC m=+6.823980731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume") pod "coredns-6d4b75cb6d-x5q6z" (UID: "20039141-6e65-4d45-9921-76b6900b3068") : object "kube-system"/"coredns" not registered
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: E0421 19:30:14.756472    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 21 19:30:14 test-preload-643468 kubelet[1073]: E0421 19:30:14.756579    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume podName:20039141-6e65-4d45-9921-76b6900b3068 nodeName:}" failed. No retries permitted until 2024-04-21 19:30:15.756562009 +0000 UTC m=+7.825897099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume") pod "coredns-6d4b75cb6d-x5q6z" (UID: "20039141-6e65-4d45-9921-76b6900b3068") : object "kube-system"/"coredns" not registered
	Apr 21 19:30:15 test-preload-643468 kubelet[1073]: E0421 19:30:15.764041    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 21 19:30:15 test-preload-643468 kubelet[1073]: E0421 19:30:15.764129    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume podName:20039141-6e65-4d45-9921-76b6900b3068 nodeName:}" failed. No retries permitted until 2024-04-21 19:30:17.764110484 +0000 UTC m=+9.833445584 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume") pod "coredns-6d4b75cb6d-x5q6z" (UID: "20039141-6e65-4d45-9921-76b6900b3068") : object "kube-system"/"coredns" not registered
	Apr 21 19:30:16 test-preload-643468 kubelet[1073]: E0421 19:30:16.196180    1073 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-x5q6z" podUID=20039141-6e65-4d45-9921-76b6900b3068
	Apr 21 19:30:17 test-preload-643468 kubelet[1073]: E0421 19:30:17.785555    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 21 19:30:17 test-preload-643468 kubelet[1073]: E0421 19:30:17.785651    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume podName:20039141-6e65-4d45-9921-76b6900b3068 nodeName:}" failed. No retries permitted until 2024-04-21 19:30:21.785627363 +0000 UTC m=+13.854962452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20039141-6e65-4d45-9921-76b6900b3068-config-volume") pod "coredns-6d4b75cb6d-x5q6z" (UID: "20039141-6e65-4d45-9921-76b6900b3068") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [7d98d326c0657a32fa4d3428a43d1fb2fa14e52c0a923b7b11608854d14cafb4] <==
	I0421 19:30:15.256567       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-643468 -n test-preload-643468
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-643468 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-643468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-643468
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-643468: (1.0817496s)
--- FAIL: TestPreload (301.10s)

x
+
TestKubernetesUpgrade (847.94s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m32.38090529s)

-- stdout --
	* [kubernetes-upgrade-595552] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-595552" primary control-plane node in "kubernetes-upgrade-595552" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0421 19:33:15.006545   47895 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:33:15.006638   47895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:33:15.006649   47895 out.go:304] Setting ErrFile to fd 2...
	I0421 19:33:15.006653   47895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:33:15.006842   47895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:33:15.007394   47895 out.go:298] Setting JSON to false
	I0421 19:33:15.008232   47895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4493,"bootTime":1713723502,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:33:15.008287   47895 start.go:139] virtualization: kvm guest
	I0421 19:33:15.010650   47895 out.go:177] * [kubernetes-upgrade-595552] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:33:15.011960   47895 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:33:15.011966   47895 notify.go:220] Checking for updates...
	I0421 19:33:15.013237   47895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:33:15.014507   47895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:33:15.015888   47895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:33:15.017208   47895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:33:15.018641   47895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:33:15.020312   47895 config.go:182] Loaded profile config "NoKubernetes-893211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:33:15.020396   47895 config.go:182] Loaded profile config "cert-expiration-942511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:33:15.020481   47895 config.go:182] Loaded profile config "offline-crio-884831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:33:15.020561   47895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:33:15.055180   47895 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 19:33:15.056474   47895 start.go:297] selected driver: kvm2
	I0421 19:33:15.056487   47895 start.go:901] validating driver "kvm2" against <nil>
	I0421 19:33:15.056497   47895 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:33:15.057457   47895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:33:15.057542   47895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:33:15.071912   47895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:33:15.071959   47895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 19:33:15.072199   47895 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0421 19:33:15.072271   47895 cni.go:84] Creating CNI manager for ""
	I0421 19:33:15.072289   47895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:33:15.072298   47895 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 19:33:15.072370   47895 start.go:340] cluster config:
	{Name:kubernetes-upgrade-595552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-595552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:33:15.072530   47895 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:33:15.074191   47895 out.go:177] * Starting "kubernetes-upgrade-595552" primary control-plane node in "kubernetes-upgrade-595552" cluster
	I0421 19:33:15.075340   47895 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:33:15.075377   47895 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:33:15.075399   47895 cache.go:56] Caching tarball of preloaded images
	I0421 19:33:15.075501   47895 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:33:15.075525   47895 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0421 19:33:15.075617   47895 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/config.json ...
	I0421 19:33:15.075638   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/config.json: {Name:mk60ab2642146dc0d3b9e3a5e348ee45e78920f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:33:15.075801   47895 start.go:360] acquireMachinesLock for kubernetes-upgrade-595552: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:34:13.555515   47895 start.go:364] duration metric: took 58.479684322s to acquireMachinesLock for "kubernetes-upgrade-595552"
	I0421 19:34:13.555596   47895 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-595552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20
.0 ClusterName:kubernetes-upgrade-595552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:34:13.555720   47895 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 19:34:13.557361   47895 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:34:13.557568   47895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:34:13.557631   47895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:34:13.577430   47895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0421 19:34:13.577879   47895 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:34:13.578531   47895 main.go:141] libmachine: Using API Version  1
	I0421 19:34:13.578557   47895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:34:13.578906   47895 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:34:13.579091   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetMachineName
	I0421 19:34:13.579249   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:13.579398   47895 start.go:159] libmachine.API.Create for "kubernetes-upgrade-595552" (driver="kvm2")
	I0421 19:34:13.579430   47895 client.go:168] LocalClient.Create starting
	I0421 19:34:13.579471   47895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 19:34:13.579509   47895 main.go:141] libmachine: Decoding PEM data...
	I0421 19:34:13.579532   47895 main.go:141] libmachine: Parsing certificate...
	I0421 19:34:13.579597   47895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 19:34:13.579620   47895 main.go:141] libmachine: Decoding PEM data...
	I0421 19:34:13.579639   47895 main.go:141] libmachine: Parsing certificate...
	I0421 19:34:13.579680   47895 main.go:141] libmachine: Running pre-create checks...
	I0421 19:34:13.579698   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .PreCreateCheck
	I0421 19:34:13.580104   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetConfigRaw
	I0421 19:34:13.580497   47895 main.go:141] libmachine: Creating machine...
	I0421 19:34:13.580516   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .Create
	I0421 19:34:13.580654   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Creating KVM machine...
	I0421 19:34:13.582021   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found existing default KVM network
	I0421 19:34:13.583780   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:13.583607   48480 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:75:36} reservation:<nil>}
	I0421 19:34:13.584586   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:13.584497   48480 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:96:9f} reservation:<nil>}
	I0421 19:34:13.585485   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:13.585371   48480 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:9b:f9} reservation:<nil>}
	I0421 19:34:13.586626   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:13.586551   48480 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e72f0}
	I0421 19:34:13.586777   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | created network xml: 
	I0421 19:34:13.586800   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | <network>
	I0421 19:34:13.586817   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |   <name>mk-kubernetes-upgrade-595552</name>
	I0421 19:34:13.586839   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |   <dns enable='no'/>
	I0421 19:34:13.586850   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |   
	I0421 19:34:13.586860   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0421 19:34:13.586881   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |     <dhcp>
	I0421 19:34:13.586895   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0421 19:34:13.586908   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |     </dhcp>
	I0421 19:34:13.586918   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |   </ip>
	I0421 19:34:13.586931   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG |   
	I0421 19:34:13.586941   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | </network>
	I0421 19:34:13.586954   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | 
	I0421 19:34:13.592491   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | trying to create private KVM network mk-kubernetes-upgrade-595552 192.168.72.0/24...
	I0421 19:34:13.670920   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | private KVM network mk-kubernetes-upgrade-595552 192.168.72.0/24 created
	I0421 19:34:13.670962   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552 ...
	I0421 19:34:13.670976   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:13.670864   48480 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:34:13.671000   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 19:34:13.671024   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:34:13.899345   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:13.899192   48480 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa...
	I0421 19:34:14.037923   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:14.037789   48480 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/kubernetes-upgrade-595552.rawdisk...
	I0421 19:34:14.037955   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Writing magic tar header
	I0421 19:34:14.037990   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Writing SSH key tar header
	I0421 19:34:14.038038   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:14.037949   48480 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552 ...
	I0421 19:34:14.038092   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552
	I0421 19:34:14.038148   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552 (perms=drwx------)
	I0421 19:34:14.038171   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 19:34:14.038191   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 19:34:14.038204   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 19:34:14.038220   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 19:34:14.038232   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 19:34:14.038248   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Creating domain...
	I0421 19:34:14.038268   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 19:34:14.038281   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:34:14.038293   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 19:34:14.038305   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 19:34:14.038318   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home/jenkins
	I0421 19:34:14.038329   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Checking permissions on dir: /home
	I0421 19:34:14.038343   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Skipping /home - not owner
	I0421 19:34:14.039672   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) define libvirt domain using xml: 
	I0421 19:34:14.039692   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) <domain type='kvm'>
	I0421 19:34:14.039703   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <name>kubernetes-upgrade-595552</name>
	I0421 19:34:14.039712   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <memory unit='MiB'>2200</memory>
	I0421 19:34:14.039721   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <vcpu>2</vcpu>
	I0421 19:34:14.039729   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <features>
	I0421 19:34:14.039755   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <acpi/>
	I0421 19:34:14.039762   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <apic/>
	I0421 19:34:14.039770   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <pae/>
	I0421 19:34:14.039782   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     
	I0421 19:34:14.039801   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   </features>
	I0421 19:34:14.039823   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <cpu mode='host-passthrough'>
	I0421 19:34:14.039834   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   
	I0421 19:34:14.039842   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   </cpu>
	I0421 19:34:14.039854   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <os>
	I0421 19:34:14.039865   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <type>hvm</type>
	I0421 19:34:14.039876   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <boot dev='cdrom'/>
	I0421 19:34:14.039887   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <boot dev='hd'/>
	I0421 19:34:14.039904   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <bootmenu enable='no'/>
	I0421 19:34:14.039913   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   </os>
	I0421 19:34:14.039922   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   <devices>
	I0421 19:34:14.039933   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <disk type='file' device='cdrom'>
	I0421 19:34:14.039953   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/boot2docker.iso'/>
	I0421 19:34:14.039964   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <target dev='hdc' bus='scsi'/>
	I0421 19:34:14.039971   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <readonly/>
	I0421 19:34:14.039976   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </disk>
	I0421 19:34:14.039985   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <disk type='file' device='disk'>
	I0421 19:34:14.039994   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 19:34:14.040015   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/kubernetes-upgrade-595552.rawdisk'/>
	I0421 19:34:14.040023   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <target dev='hda' bus='virtio'/>
	I0421 19:34:14.040032   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </disk>
	I0421 19:34:14.040046   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <interface type='network'>
	I0421 19:34:14.040054   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <source network='mk-kubernetes-upgrade-595552'/>
	I0421 19:34:14.040058   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <model type='virtio'/>
	I0421 19:34:14.040063   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </interface>
	I0421 19:34:14.040068   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <interface type='network'>
	I0421 19:34:14.040074   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <source network='default'/>
	I0421 19:34:14.040078   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <model type='virtio'/>
	I0421 19:34:14.040083   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </interface>
	I0421 19:34:14.040088   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <serial type='pty'>
	I0421 19:34:14.040093   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <target port='0'/>
	I0421 19:34:14.040097   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </serial>
	I0421 19:34:14.040102   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <console type='pty'>
	I0421 19:34:14.040107   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <target type='serial' port='0'/>
	I0421 19:34:14.040113   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </console>
	I0421 19:34:14.040120   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     <rng model='virtio'>
	I0421 19:34:14.040129   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)       <backend model='random'>/dev/random</backend>
	I0421 19:34:14.040135   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     </rng>
	I0421 19:34:14.040143   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     
	I0421 19:34:14.040150   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)     
	I0421 19:34:14.040158   47895 main.go:141] libmachine: (kubernetes-upgrade-595552)   </devices>
	I0421 19:34:14.040165   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) </domain>
	I0421 19:34:14.040175   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) 
	I0421 19:34:14.048204   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:f6:a6:40 in network default
	I0421 19:34:14.048976   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Ensuring networks are active...
	I0421 19:34:14.049009   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:14.049861   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Ensuring network default is active
	I0421 19:34:14.050352   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Ensuring network mk-kubernetes-upgrade-595552 is active
	I0421 19:34:14.051634   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Getting domain xml...
	I0421 19:34:14.052395   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Creating domain...
	I0421 19:34:15.432531   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Waiting to get IP...
	I0421 19:34:15.433328   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:15.433800   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:15.433827   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:15.433754   48480 retry.go:31] will retry after 217.433862ms: waiting for machine to come up
	I0421 19:34:15.653411   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:15.654030   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:15.654068   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:15.653997   48480 retry.go:31] will retry after 379.206798ms: waiting for machine to come up
	I0421 19:34:16.034602   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:16.035085   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:16.035109   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:16.035070   48480 retry.go:31] will retry after 302.999649ms: waiting for machine to come up
	I0421 19:34:16.340049   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:16.340905   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:16.340941   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:16.340841   48480 retry.go:31] will retry after 435.461147ms: waiting for machine to come up
	I0421 19:34:16.777494   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:16.778133   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:16.778198   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:16.778019   48480 retry.go:31] will retry after 729.058557ms: waiting for machine to come up
	I0421 19:34:17.508526   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:17.509105   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:17.509152   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:17.509071   48480 retry.go:31] will retry after 694.923174ms: waiting for machine to come up
	I0421 19:34:18.206086   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:18.206579   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:18.206611   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:18.206545   48480 retry.go:31] will retry after 1.161164259s: waiting for machine to come up
	I0421 19:34:19.369557   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:19.370072   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:19.370100   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:19.370019   48480 retry.go:31] will retry after 1.488832605s: waiting for machine to come up
	I0421 19:34:20.860333   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:20.860831   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:20.860865   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:20.860769   48480 retry.go:31] will retry after 1.609186357s: waiting for machine to come up
	I0421 19:34:22.471489   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:22.471980   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:22.472008   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:22.471922   48480 retry.go:31] will retry after 1.890865764s: waiting for machine to come up
	I0421 19:34:24.364805   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:24.365329   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:24.365357   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:24.365274   48480 retry.go:31] will retry after 2.372300047s: waiting for machine to come up
	I0421 19:34:26.739325   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:26.739881   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:26.739909   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:26.739815   48480 retry.go:31] will retry after 2.559935953s: waiting for machine to come up
	I0421 19:34:29.301598   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:29.302080   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:29.302102   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:29.302025   48480 retry.go:31] will retry after 3.966049745s: waiting for machine to come up
	I0421 19:34:33.272690   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:33.273190   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find current IP address of domain kubernetes-upgrade-595552 in network mk-kubernetes-upgrade-595552
	I0421 19:34:33.273213   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | I0421 19:34:33.273134   48480 retry.go:31] will retry after 5.372156919s: waiting for machine to come up
	I0421 19:34:38.646802   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.647372   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Found IP for machine: 192.168.72.31
	I0421 19:34:38.647397   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Reserving static IP address...
	I0421 19:34:38.647427   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has current primary IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.647877   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-595552", mac: "52:54:00:8b:bd:15", ip: "192.168.72.31"} in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.726876   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Getting to WaitForSSH function...
	I0421 19:34:38.726900   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Reserved static IP address: 192.168.72.31
	I0421 19:34:38.726916   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Waiting for SSH to be available...
	I0421 19:34:38.729624   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.730153   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:38.730190   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.730352   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Using SSH client type: external
	I0421 19:34:38.730374   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa (-rw-------)
	I0421 19:34:38.730424   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:34:38.730442   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | About to run SSH command:
	I0421 19:34:38.730458   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | exit 0
	I0421 19:34:38.858393   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | SSH cmd err, output: <nil>: 
	I0421 19:34:38.858652   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) KVM machine creation complete!
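	Illustrative only, not from the captured run: one way to inspect the same DHCP lease from the Jenkins host, assuming virsh access to the qemu:///system connection the kvm2 driver uses.
	    virsh --connect qemu:///system net-dhcp-leases mk-kubernetes-upgrade-595552          # lists the lease seen above: 192.168.72.31 for MAC 52:54:00:8b:bd:15
	    virsh --connect qemu:///system domifaddr kubernetes-upgrade-595552 --source lease    # same information, scoped to the domain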
	I0421 19:34:38.858934   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetConfigRaw
	I0421 19:34:38.859431   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:38.859638   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:38.859810   47895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 19:34:38.859825   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetState
	I0421 19:34:38.861442   47895 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 19:34:38.861455   47895 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 19:34:38.861460   47895 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 19:34:38.861465   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:38.863931   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.864355   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:38.864387   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.864590   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:38.864796   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:38.864978   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:38.865136   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:38.865302   47895 main.go:141] libmachine: Using SSH client type: native
	I0421 19:34:38.865476   47895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.31 22 <nil> <nil>}
	I0421 19:34:38.865487   47895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 19:34:38.973607   47895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:34:38.973635   47895 main.go:141] libmachine: Detecting the provisioner...
	I0421 19:34:38.973646   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:38.976347   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.976754   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:38.976784   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:38.976968   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:38.977141   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:38.977277   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:38.977388   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:38.977562   47895 main.go:141] libmachine: Using SSH client type: native
	I0421 19:34:38.977759   47895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.31 22 <nil> <nil>}
	I0421 19:34:38.977774   47895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 19:34:39.087817   47895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 19:34:39.087934   47895 main.go:141] libmachine: found compatible host: buildroot
	I0421 19:34:39.087948   47895 main.go:141] libmachine: Provisioning with buildroot...
	I0421 19:34:39.087960   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetMachineName
	I0421 19:34:39.088191   47895 buildroot.go:166] provisioning hostname "kubernetes-upgrade-595552"
	I0421 19:34:39.088216   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetMachineName
	I0421 19:34:39.088365   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:39.090919   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.091172   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.091196   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.091344   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:39.091515   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.091676   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.091838   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:39.092000   47895 main.go:141] libmachine: Using SSH client type: native
	I0421 19:34:39.092181   47895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.31 22 <nil> <nil>}
	I0421 19:34:39.092196   47895 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-595552 && echo "kubernetes-upgrade-595552" | sudo tee /etc/hostname
	I0421 19:34:39.224259   47895 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-595552
	
	I0421 19:34:39.224298   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:39.227425   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.227744   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.227775   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.227961   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:39.228163   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.228328   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.228487   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:39.228670   47895 main.go:141] libmachine: Using SSH client type: native
	I0421 19:34:39.228856   47895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.31 22 <nil> <nil>}
	I0421 19:34:39.228879   47895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-595552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-595552/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-595552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:34:39.350397   47895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:34:39.350423   47895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:34:39.350458   47895 buildroot.go:174] setting up certificates
	I0421 19:34:39.350468   47895 provision.go:84] configureAuth start
	I0421 19:34:39.350478   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetMachineName
	I0421 19:34:39.350752   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetIP
	I0421 19:34:39.353272   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.353642   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.353669   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.353782   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:39.356003   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.356344   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.356373   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.356482   47895 provision.go:143] copyHostCerts
	I0421 19:34:39.356540   47895 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:34:39.356572   47895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:34:39.356642   47895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:34:39.356740   47895 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:34:39.356750   47895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:34:39.356771   47895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:34:39.356827   47895 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:34:39.356834   47895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:34:39.356850   47895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:34:39.356889   47895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-595552 san=[127.0.0.1 192.168.72.31 kubernetes-upgrade-595552 localhost minikube]
	I0421 19:34:39.419016   47895 provision.go:177] copyRemoteCerts
	I0421 19:34:39.419088   47895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:34:39.419113   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:39.421778   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.422128   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.422158   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.422346   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:39.422535   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.422703   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:39.422880   47895 sshutil.go:53] new ssh client: &{IP:192.168.72.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa Username:docker}
	I0421 19:34:39.514259   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:34:39.545617   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:34:39.575228   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0421 19:34:39.606240   47895 provision.go:87] duration metric: took 255.736315ms to configureAuth
	I0421 19:34:39.606288   47895 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:34:39.606507   47895 config.go:182] Loaded profile config "kubernetes-upgrade-595552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:34:39.606583   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:39.609293   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.609723   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.609764   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.609945   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:39.610187   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.610350   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.610501   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:39.610659   47895 main.go:141] libmachine: Using SSH client type: native
	I0421 19:34:39.610886   47895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.31 22 <nil> <nil>}
	I0421 19:34:39.610912   47895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:34:39.979220   47895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:34:39.979250   47895 main.go:141] libmachine: Checking connection to Docker...
	I0421 19:34:39.979260   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetURL
	I0421 19:34:39.980571   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Using libvirt version 6000000
	I0421 19:34:39.982760   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.983079   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.983110   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.983247   47895 main.go:141] libmachine: Docker is up and running!
	I0421 19:34:39.983264   47895 main.go:141] libmachine: Reticulating splines...
	I0421 19:34:39.983271   47895 client.go:171] duration metric: took 26.403834485s to LocalClient.Create
	I0421 19:34:39.983291   47895 start.go:167] duration metric: took 26.403894979s to libmachine.API.Create "kubernetes-upgrade-595552"
	I0421 19:34:39.983303   47895 start.go:293] postStartSetup for "kubernetes-upgrade-595552" (driver="kvm2")
	I0421 19:34:39.983315   47895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:34:39.983333   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:39.983558   47895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:34:39.983586   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:39.985785   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.986566   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:39.986594   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:39.986599   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:39.986753   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:39.986912   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:39.987045   47895 sshutil.go:53] new ssh client: &{IP:192.168.72.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa Username:docker}
	I0421 19:34:40.074000   47895 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:34:40.078958   47895 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:34:40.078982   47895 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:34:40.079045   47895 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:34:40.079130   47895 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:34:40.079254   47895 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:34:40.090029   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:34:40.118950   47895 start.go:296] duration metric: took 135.633267ms for postStartSetup
	I0421 19:34:40.118993   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetConfigRaw
	I0421 19:34:40.119521   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetIP
	I0421 19:34:40.122090   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.122438   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:40.122459   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.122666   47895 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/config.json ...
	I0421 19:34:40.122878   47895 start.go:128] duration metric: took 26.567145708s to createHost
	I0421 19:34:40.122899   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:40.125210   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.125546   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:40.125587   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.125704   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:40.125896   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:40.126085   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:40.126237   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:40.126408   47895 main.go:141] libmachine: Using SSH client type: native
	I0421 19:34:40.126619   47895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.31 22 <nil> <nil>}
	I0421 19:34:40.126634   47895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:34:40.239516   47895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713728080.224601868
	
	I0421 19:34:40.239540   47895 fix.go:216] guest clock: 1713728080.224601868
	I0421 19:34:40.239562   47895 fix.go:229] Guest: 2024-04-21 19:34:40.224601868 +0000 UTC Remote: 2024-04-21 19:34:40.122890405 +0000 UTC m=+85.164777492 (delta=101.711463ms)
	I0421 19:34:40.239589   47895 fix.go:200] guest clock delta is within tolerance: 101.711463ms
	I0421 19:34:40.239596   47895 start.go:83] releasing machines lock for "kubernetes-upgrade-595552", held for 26.684033534s
	I0421 19:34:40.239625   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:40.239918   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetIP
	I0421 19:34:40.242652   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.243048   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:40.243096   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.243247   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:40.243733   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:40.243902   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:34:40.243996   47895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:34:40.244031   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:40.244087   47895 ssh_runner.go:195] Run: cat /version.json
	I0421 19:34:40.244113   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:34:40.246798   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.247190   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.247409   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:40.247493   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.247689   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:40.247766   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:40.247796   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:40.247875   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:40.248016   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:34:40.248130   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:40.248246   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:34:40.248407   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:34:40.248451   47895 sshutil.go:53] new ssh client: &{IP:192.168.72.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa Username:docker}
	I0421 19:34:40.248554   47895 sshutil.go:53] new ssh client: &{IP:192.168.72.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa Username:docker}
	I0421 19:34:40.337146   47895 ssh_runner.go:195] Run: systemctl --version
	I0421 19:34:40.362147   47895 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:34:40.533084   47895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:34:40.541323   47895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:34:40.541467   47895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:34:40.560785   47895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:34:40.560813   47895 start.go:494] detecting cgroup driver to use...
	I0421 19:34:40.560894   47895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:34:40.582347   47895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:34:40.602564   47895 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:34:40.602632   47895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:34:40.619638   47895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:34:40.637846   47895 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:34:40.772498   47895 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:34:40.952810   47895 docker.go:233] disabling docker service ...
	I0421 19:34:40.952871   47895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:34:40.972652   47895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:34:40.989743   47895 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:34:41.164186   47895 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:34:41.321857   47895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:34:41.340622   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:34:41.363634   47895 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0421 19:34:41.363702   47895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:34:41.377453   47895 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:34:41.377517   47895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:34:41.391585   47895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:34:41.405711   47895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:34:41.419052   47895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:34:41.433071   47895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:34:41.446897   47895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:34:41.446963   47895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:34:41.463192   47895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:34:41.476199   47895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:34:41.630120   47895 ssh_runner.go:195] Run: sudo systemctl restart crio
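	Illustrative only, not from the captured run: a minimal sketch of how the CRI-O drop-in edited by the sed commands above could be double-checked on the guest after the restart; the path and keys are taken from those commands.
	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, given the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"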
	I0421 19:34:41.796704   47895 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:34:41.796788   47895 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:34:41.802745   47895 start.go:562] Will wait 60s for crictl version
	I0421 19:34:41.802812   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:41.808555   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:34:41.854855   47895 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:34:41.854947   47895 ssh_runner.go:195] Run: crio --version
	I0421 19:34:41.893628   47895 ssh_runner.go:195] Run: crio --version
	I0421 19:34:41.933724   47895 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0421 19:34:41.935270   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetIP
	I0421 19:34:41.938497   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:41.938953   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:34:41.938978   47895 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:34:41.939254   47895 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0421 19:34:41.945586   47895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:34:41.963700   47895 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-595552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-595552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.31 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:34:41.963820   47895 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:34:41.963888   47895 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:34:42.008359   47895 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0421 19:34:42.008427   47895 ssh_runner.go:195] Run: which lz4
	I0421 19:34:42.013340   47895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 19:34:42.018358   47895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:34:42.018392   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0421 19:34:44.107375   47895 crio.go:462] duration metric: took 2.094076348s to copy over tarball
	I0421 19:34:44.107452   47895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 19:34:47.210959   47895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.103461282s)
	I0421 19:34:47.210994   47895 crio.go:469] duration metric: took 3.103591403s to extract the tarball
	I0421 19:34:47.211002   47895 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 19:34:47.266944   47895 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:34:47.318294   47895 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0421 19:34:47.318324   47895 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0421 19:34:47.318399   47895 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:34:47.318421   47895 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:34:47.318429   47895 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0421 19:34:47.318394   47895 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:34:47.318451   47895 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:34:47.318430   47895 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:34:47.318451   47895 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0421 19:34:47.318627   47895 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:34:47.319692   47895 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:34:47.320064   47895 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:34:47.320075   47895 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0421 19:34:47.320080   47895 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:34:47.320070   47895 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:34:47.320081   47895 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:34:47.320145   47895 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:34:47.320147   47895 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0421 19:34:47.480412   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:34:47.488763   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:34:47.489365   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0421 19:34:47.510245   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:34:47.552113   47895 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0421 19:34:47.552135   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:34:47.552154   47895 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:34:47.552203   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.620073   47895 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0421 19:34:47.620135   47895 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:34:47.620179   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.620361   47895 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0421 19:34:47.620385   47895 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:34:47.620426   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.620419   47895 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0421 19:34:47.620455   47895 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:34:47.620494   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.642454   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:34:47.642511   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:34:47.642550   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0421 19:34:47.642584   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:34:47.642707   47895 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0421 19:34:47.642744   47895 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:34:47.642774   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.643470   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0421 19:34:47.645444   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0421 19:34:47.743074   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0421 19:34:47.796680   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0421 19:34:47.796703   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0421 19:34:47.796796   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:34:47.796808   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0421 19:34:47.804142   47895 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0421 19:34:47.804199   47895 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0421 19:34:47.804243   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.820848   47895 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0421 19:34:47.820891   47895 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0421 19:34:47.820953   47895 ssh_runner.go:195] Run: which crictl
	I0421 19:34:47.847835   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0421 19:34:47.847884   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0421 19:34:47.847925   47895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0421 19:34:47.899450   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0421 19:34:47.908401   47895 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0421 19:34:48.393562   47895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:34:48.552238   47895 cache_images.go:92] duration metric: took 1.233895029s to LoadCachedImages
	W0421 19:34:48.552342   47895 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0421 19:34:48.552377   47895 kubeadm.go:928] updating node { 192.168.72.31 8443 v1.20.0 crio true true} ...
	I0421 19:34:48.552506   47895 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-595552 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-595552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:34:48.552600   47895 ssh_runner.go:195] Run: crio config
	I0421 19:34:48.609451   47895 cni.go:84] Creating CNI manager for ""
	I0421 19:34:48.609475   47895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:34:48.609488   47895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:34:48.609505   47895 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.31 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-595552 NodeName:kubernetes-upgrade-595552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0421 19:34:48.609635   47895 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-595552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 19:34:48.609711   47895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0421 19:34:48.622318   47895 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:34:48.622386   47895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 19:34:48.633998   47895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0421 19:34:48.655260   47895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:34:48.678021   47895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
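	Illustrative only, not from the captured run: the rendered kubeadm config copied above is the input to the cluster bootstrap. The exact command minikube issues is not shown in this excerpt; a hypothetical equivalent, assuming the kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries/v1.20.0, would be:
	    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new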
	I0421 19:34:48.702036   47895 ssh_runner.go:195] Run: grep 192.168.72.31	control-plane.minikube.internal$ /etc/hosts
	I0421 19:34:48.706956   47895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:34:48.724632   47895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:34:48.880557   47895 ssh_runner.go:195] Run: sudo systemctl start kubelet
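For reference, the kubelet unit files written just above can be checked through systemd itself; a small sketch, assuming the unit and drop-in paths from this log:

	# Show /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in:
	sudo systemctl cat kubelet
	# kubeadm later warns that the service is not enabled; this confirms it:
	sudo systemctl is-enabled kubelet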
	I0421 19:34:48.900830   47895 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552 for IP: 192.168.72.31
	I0421 19:34:48.900852   47895 certs.go:194] generating shared ca certs ...
	I0421 19:34:48.900867   47895 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:48.901019   47895 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 19:34:48.901067   47895 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 19:34:48.901080   47895 certs.go:256] generating profile certs ...
	I0421 19:34:48.901163   47895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.key
	I0421 19:34:48.901184   47895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.crt with IP's: []
	I0421 19:34:49.360956   47895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.crt ...
	I0421 19:34:49.360985   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.crt: {Name:mke334dffead9302557aea6e5e1a4fe78653d61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:49.361154   47895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.key ...
	I0421 19:34:49.361178   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.key: {Name:mkf03a08b93db67d637c18447f263c80595005ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:49.361308   47895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.key.d716dd45
	I0421 19:34:49.361331   47895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.crt.d716dd45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.31]
	I0421 19:34:49.487720   47895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.crt.d716dd45 ...
	I0421 19:34:49.487760   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.crt.d716dd45: {Name:mkf2722aafb676d85f821cdc9c7cec8386614889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:49.487983   47895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.key.d716dd45 ...
	I0421 19:34:49.488003   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.key.d716dd45: {Name:mkf67b0ed97800860b13081b69bbd065342a7772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:49.488114   47895 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.crt.d716dd45 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.crt
	I0421 19:34:49.488234   47895 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.key.d716dd45 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.key
	I0421 19:34:49.488318   47895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.key
	I0421 19:34:49.488341   47895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.crt with IP's: []
	I0421 19:34:49.646106   47895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.crt ...
	I0421 19:34:49.646132   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.crt: {Name:mk018a48c111f435cbeeafa77d60a35c13aca80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:49.669487   47895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.key ...
	I0421 19:34:49.669530   47895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.key: {Name:mka84772b99cb05c236f158da06aa17931a94943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:34:49.669776   47895 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 19:34:49.669826   47895 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 19:34:49.669837   47895 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 19:34:49.669893   47895 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 19:34:49.669942   47895 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 19:34:49.669982   47895 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 19:34:49.670041   47895 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:34:49.670812   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:34:49.704988   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:34:49.738518   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:34:49.773582   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:34:49.809743   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0421 19:34:49.844093   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:34:49.872807   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:34:49.913442   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 19:34:49.943407   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:34:49.973244   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 19:34:50.004379   47895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 19:34:50.035387   47895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:34:50.097185   47895 ssh_runner.go:195] Run: openssl version
	I0421 19:34:50.106254   47895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:34:50.123330   47895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:34:50.129432   47895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:34:50.129501   47895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:34:50.136445   47895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:34:50.150851   47895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 19:34:50.165466   47895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 19:34:50.171336   47895 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:34:50.171397   47895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 19:34:50.178402   47895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 19:34:50.192958   47895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 19:34:50.207416   47895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 19:34:50.213438   47895 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:34:50.213491   47895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 19:34:50.220700   47895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
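The hashing steps above follow the usual OpenSSL CA-directory convention: the symlink name in /etc/ssl/certs is the certificate's subject hash. A minimal cross-check, using the minikubeCA example from this log:

	# Prints the subject hash (b5213941 here), which should match the symlink name:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# The symlink created above should resolve back to the same PEM file:
	readlink -f /etc/ssl/certs/b5213941.0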
	I0421 19:34:50.234812   47895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:34:50.240115   47895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:34:50.240174   47895 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-595552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-595552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.31 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:34:50.240259   47895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 19:34:50.240317   47895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:34:50.285987   47895 cri.go:89] found id: ""
	I0421 19:34:50.286074   47895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 19:34:50.299286   47895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:34:50.312571   47895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:34:50.325504   47895 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:34:50.325541   47895 kubeadm.go:156] found existing configuration files:
	
	I0421 19:34:50.325603   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:34:50.339332   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:34:50.339402   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:34:50.353488   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:34:50.369251   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:34:50.369340   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:34:50.385490   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:34:50.399191   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:34:50.399271   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:34:50.412798   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:34:50.425533   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:34:50.425607   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:34:50.442670   47895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:34:50.588395   47895 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:34:50.588973   47895 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:34:50.826327   47895 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:34:50.826480   47895 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:34:50.826599   47895 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:34:51.049372   47895 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:34:51.052606   47895 out.go:204]   - Generating certificates and keys ...
	I0421 19:34:51.052693   47895 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:34:51.052802   47895 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:34:51.168299   47895 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 19:34:51.320382   47895 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 19:34:51.375772   47895 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 19:34:51.463688   47895 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 19:34:51.561183   47895 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 19:34:51.561385   47895 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-595552 localhost] and IPs [192.168.72.31 127.0.0.1 ::1]
	I0421 19:34:51.770222   47895 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 19:34:51.773841   47895 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-595552 localhost] and IPs [192.168.72.31 127.0.0.1 ::1]
	I0421 19:34:52.038026   47895 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 19:34:52.127519   47895 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 19:34:52.304352   47895 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 19:34:52.304538   47895 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:34:52.562014   47895 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:34:52.774656   47895 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:34:53.067405   47895 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:34:53.187654   47895 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:34:53.208127   47895 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:34:53.209163   47895 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:34:53.209210   47895 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:34:53.348470   47895 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:34:53.350264   47895 out.go:204]   - Booting up control plane ...
	I0421 19:34:53.350407   47895 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:34:53.358641   47895 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:34:53.365466   47895 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:34:53.366989   47895 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:34:53.373626   47895 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:35:33.371403   47895 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:35:33.371844   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:35:33.372194   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:35:38.372915   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:35:38.373153   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:35:48.374137   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:35:48.374418   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:36:08.375606   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:36:08.376144   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:36:48.376139   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:36:48.376439   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:36:48.376469   47895 kubeadm.go:309] 
	I0421 19:36:48.376538   47895 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:36:48.376592   47895 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:36:48.376599   47895 kubeadm.go:309] 
	I0421 19:36:48.376643   47895 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:36:48.376688   47895 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:36:48.376826   47895 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:36:48.376837   47895 kubeadm.go:309] 
	I0421 19:36:48.376994   47895 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:36:48.377070   47895 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:36:48.377125   47895 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:36:48.377153   47895 kubeadm.go:309] 
	I0421 19:36:48.377303   47895 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:36:48.377416   47895 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:36:48.377448   47895 kubeadm.go:309] 
	I0421 19:36:48.377590   47895 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:36:48.377706   47895 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:36:48.377812   47895 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:36:48.377909   47895 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:36:48.377919   47895 kubeadm.go:309] 
	I0421 19:36:48.379346   47895 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:36:48.379468   47895 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:36:48.379568   47895 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0421 19:36:48.379725   47895 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-595552 localhost] and IPs [192.168.72.31 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-595552 localhost] and IPs [192.168.72.31 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
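	Collected into one place, the troubleshooting loop kubeadm suggests above would look roughly like this on the node (the same commands as in the output, shown here only as a sketch):

		sudo systemctl status kubelet --no-pager
		sudo journalctl -xeu kubelet | tail -n 100
		# List any control-plane containers CRI-O started (none were found in this run):
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Then inspect a failing container's logs:
		# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID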
	
	I0421 19:36:48.379799   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:36:49.769280   47895 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389447187s)
	I0421 19:36:49.769369   47895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:36:49.788563   47895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:36:49.800679   47895 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:36:49.800714   47895 kubeadm.go:156] found existing configuration files:
	
	I0421 19:36:49.800796   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:36:49.811452   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:36:49.811524   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:36:49.826141   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:36:49.838038   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:36:49.838123   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:36:49.849507   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:36:49.860364   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:36:49.860440   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:36:49.871712   47895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:36:49.882424   47895 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:36:49.882491   47895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:36:49.893414   47895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:36:49.977488   47895 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:36:49.977625   47895 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:36:50.149404   47895 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:36:50.149636   47895 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:36:50.149802   47895 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:36:50.424594   47895 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:36:50.426114   47895 out.go:204]   - Generating certificates and keys ...
	I0421 19:36:50.426221   47895 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:36:50.426314   47895 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:36:50.426415   47895 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:36:50.426490   47895 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:36:50.427022   47895 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:36:50.427326   47895 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:36:50.427994   47895 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:36:50.428442   47895 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:36:50.429078   47895 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:36:50.429474   47895 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:36:50.429630   47895 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:36:50.429715   47895 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:36:50.813180   47895 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:36:51.022680   47895 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:36:51.185984   47895 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:36:51.437165   47895 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:36:51.458110   47895 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:36:51.459721   47895 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:36:51.459788   47895 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:36:51.635108   47895 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:36:51.636857   47895 out.go:204]   - Booting up control plane ...
	I0421 19:36:51.636960   47895 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:36:51.645894   47895 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:36:51.648388   47895 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:36:51.660183   47895 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:36:51.669981   47895 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:37:31.672662   47895 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:37:31.672902   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:37:31.673134   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:37:36.673845   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:37:36.674119   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:37:46.674972   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:37:46.675252   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:38:06.676608   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:38:06.676880   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:38:46.676178   47895 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:38:46.676411   47895 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:38:46.676434   47895 kubeadm.go:309] 
	I0421 19:38:46.676500   47895 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:38:46.676564   47895 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:38:46.676575   47895 kubeadm.go:309] 
	I0421 19:38:46.676629   47895 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:38:46.676700   47895 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:38:46.676837   47895 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:38:46.676853   47895 kubeadm.go:309] 
	I0421 19:38:46.677020   47895 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:38:46.677082   47895 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:38:46.677123   47895 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:38:46.677135   47895 kubeadm.go:309] 
	I0421 19:38:46.677268   47895 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:38:46.677374   47895 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:38:46.677385   47895 kubeadm.go:309] 
	I0421 19:38:46.677516   47895 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:38:46.677625   47895 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:38:46.677724   47895 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:38:46.677821   47895 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:38:46.677836   47895 kubeadm.go:309] 
	I0421 19:38:46.678933   47895 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:38:46.679007   47895 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:38:46.679063   47895 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:38:46.679142   47895 kubeadm.go:393] duration metric: took 3m56.438970514s to StartCluster
	I0421 19:38:46.679200   47895 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:38:46.679266   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:38:46.733371   47895 cri.go:89] found id: ""
	I0421 19:38:46.733397   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.733405   47895 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:38:46.733411   47895 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:38:46.733467   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:38:46.780046   47895 cri.go:89] found id: ""
	I0421 19:38:46.780080   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.780089   47895 logs.go:278] No container was found matching "etcd"
	I0421 19:38:46.780095   47895 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:38:46.780150   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:38:46.819899   47895 cri.go:89] found id: ""
	I0421 19:38:46.819930   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.819938   47895 logs.go:278] No container was found matching "coredns"
	I0421 19:38:46.819943   47895 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:38:46.820027   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:38:46.864678   47895 cri.go:89] found id: ""
	I0421 19:38:46.864710   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.864729   47895 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:38:46.864737   47895 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:38:46.864802   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:38:46.904652   47895 cri.go:89] found id: ""
	I0421 19:38:46.904682   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.904693   47895 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:38:46.904700   47895 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:38:46.904780   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:38:46.943424   47895 cri.go:89] found id: ""
	I0421 19:38:46.943452   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.943461   47895 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:38:46.943467   47895 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:38:46.943525   47895 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:38:46.981942   47895 cri.go:89] found id: ""
	I0421 19:38:46.981976   47895 logs.go:276] 0 containers: []
	W0421 19:38:46.981987   47895 logs.go:278] No container was found matching "kindnet"
	I0421 19:38:46.981999   47895 logs.go:123] Gathering logs for container status ...
	I0421 19:38:46.982016   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:38:47.028130   47895 logs.go:123] Gathering logs for kubelet ...
	I0421 19:38:47.028156   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:38:47.085736   47895 logs.go:123] Gathering logs for dmesg ...
	I0421 19:38:47.085777   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:38:47.102146   47895 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:38:47.102173   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:38:47.217927   47895 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
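	The describe-nodes failure above is expected while the control plane never came up; a quick hypothetical probe from the node would show the same refusal:

		# While the apiserver is down this connection is refused;
		# a healthy control plane would answer over TLS on 8443 instead.
		curl -k https://localhost:8443/healthz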
	I0421 19:38:47.217948   47895 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:38:47.217965   47895 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0421 19:38:47.321591   47895 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:38:47.321653   47895 out.go:239] * 
	* 
	W0421 19:38:47.321729   47895 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:38:47.321763   47895 out.go:239] * 
	* 
	W0421 19:38:47.322573   47895 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:38:47.326189   47895 out.go:177] 
	W0421 19:38:47.327391   47895 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:38:47.327445   47895 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:38:47.327472   47895 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:38:47.328982   47895 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-595552
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-595552: (3.607422853s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-595552 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-595552 status --format={{.Host}}: exit status 7 (75.108545ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0421 19:39:06.204851   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.368670925s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-595552 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.333225ms)

-- stdout --
	* [kubernetes-upgrade-595552] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-595552
	    minikube start -p kubernetes-upgrade-595552 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5955522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-595552 --kubernetes-version=v1.30.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0421 19:41:09.208579   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-595552 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (7m7.318894645s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-21 19:47:19.899184317 +0000 UTC m=+5148.423319755
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-595552 -n kubernetes-upgrade-595552
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-595552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-595552 logs -n 25: (1.104354353s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:38 UTC | 21 Apr 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | cert-options-015184 ssh                                | cert-options-015184          | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-015184 -- sudo                         | cert-options-015184          | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-015184                                 | cert-options-015184          | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| pause   | -p pause-321307                                        | pause-321307                 | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-321307                                        | pause-321307                 | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-321307                                        | pause-321307                 | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-321307                                        | pause-321307                 | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-321307                                        | pause-321307                 | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:39 UTC | 21 Apr 24 19:41 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:40 UTC | 21 Apr 24 19:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-167454  | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:41 UTC | 21 Apr 24 19:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:41 UTC |                     |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-597568             | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:42 UTC | 21 Apr 24 19:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-867585        | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-167454       | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:45:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:45:14.424926   58211 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:45:14.425056   58211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:45:14.425066   58211 out.go:304] Setting ErrFile to fd 2...
	I0421 19:45:14.425072   58211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:45:14.425272   58211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:45:14.425841   58211 out.go:298] Setting JSON to false
	I0421 19:45:14.426828   58211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5212,"bootTime":1713723502,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:45:14.426890   58211 start.go:139] virtualization: kvm guest
	I0421 19:45:14.429358   58211 out.go:177] * [old-k8s-version-867585] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:45:14.430916   58211 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:45:14.432394   58211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:45:14.430972   58211 notify.go:220] Checking for updates...
	I0421 19:45:14.435106   58211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:45:14.436519   58211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:45:14.438014   58211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:45:14.439322   58211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:45:14.440980   58211 config.go:182] Loaded profile config "old-k8s-version-867585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:45:14.441380   58211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:45:14.441414   58211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:45:14.456399   58211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40747
	I0421 19:45:14.456806   58211 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:45:14.457431   58211 main.go:141] libmachine: Using API Version  1
	I0421 19:45:14.457464   58211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:45:14.457901   58211 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:45:14.458214   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:45:14.460134   58211 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0421 19:45:14.461518   58211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:45:14.461812   58211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:45:14.461845   58211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:45:14.476476   58211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0421 19:45:14.476892   58211 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:45:14.477414   58211 main.go:141] libmachine: Using API Version  1
	I0421 19:45:14.477439   58211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:45:14.477742   58211 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:45:14.477956   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:45:14.515718   58211 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:45:14.517040   58211 start.go:297] selected driver: kvm2
	I0421 19:45:14.517057   58211 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:45:14.517196   58211 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:45:14.517976   58211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:45:14.518091   58211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:45:14.533568   58211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:45:14.535730   58211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:45:14.535792   58211 cni.go:84] Creating CNI manager for ""
	I0421 19:45:14.535805   58211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:45:14.535871   58211 start.go:340] cluster config:
	{Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:45:14.536005   58211 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:45:14.538849   58211 out.go:177] * Starting "old-k8s-version-867585" primary control-plane node in "old-k8s-version-867585" cluster
	I0421 19:45:13.971240   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.041082722s)
	W0421 19:45:13.971289   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:13.949106    6439 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:13Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:13.949106    6439 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:13Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
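	(The crictl log fetch above fails with a gRPC DeadlineExceeded, which usually means the call hit crictl's request deadline rather than the container having no logs. A minimal diagnostic sketch, assuming shell access to the affected node and that crictl's global --timeout flag raises that deadline; the container ID is taken from the log line above and the commands are illustrative, not part of this run:
	    sudo crictl --timeout 30s logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2
	    sudo crictl --timeout 30s ps -a
	If these also time out, CRI-O itself is unresponsive rather than merely slow to answer.)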
	I0421 19:45:13.971300   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:45:13.971318   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:14.028698   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:45:14.028730   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:14.071910   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:14.071935   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:14.125585   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:14.125608   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:14.182539   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:14.182570   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:14.237597   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:45:14.237627   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:14.290945   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:45:14.290982   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:14.342147   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:45:14.342185   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:14.403984   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:45:14.404014   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:14.456197   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:45:14.456225   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:14.504250   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:45:14.504288   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:14.586412   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:45:14.586446   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:45:14.963893   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:14.963938   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:15.059406   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:15.059445   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:15.077026   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:45:15.077067   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:45:15.155323   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
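	(The "describe nodes" step fails because nothing is answering on localhost:8443, i.e. the apiserver the kubeconfig points at is not up. A hedged check one could run on the same node, using only commands already present in this log plus standard tooling; these are illustrative and were not part of this run:
	    sudo ss -tlnp | grep 8443
	    sudo crictl ps -a --name kube-apiserver
	    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
	ss shows whether anything is listening on 8443, crictl shows whether the kube-apiserver container is running or crash-looping, and /readyz reports apiserver health once it is listening.)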
	I0421 19:45:14.434264   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:14.540193   58211 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:45:14.540239   58211 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:45:14.540249   58211 cache.go:56] Caching tarball of preloaded images
	I0421 19:45:14.540364   58211 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:45:14.540402   58211 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0421 19:45:14.540534   58211 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/config.json ...
	I0421 19:45:14.540764   58211 start.go:360] acquireMachinesLock for old-k8s-version-867585: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:45:17.656224   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:45:17.673877   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:45:17.673957   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:45:17.717463   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:17.717489   55975 cri.go:89] found id: ""
	I0421 19:45:17.717499   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:45:17.717553   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.722917   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:45:17.722996   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:45:17.765308   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:17.765328   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:17.765332   55975 cri.go:89] found id: ""
	I0421 19:45:17.765339   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:45:17.765397   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.770866   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.775777   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:45:17.775843   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:45:17.820262   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:17.820289   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:17.820293   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:17.820296   55975 cri.go:89] found id: ""
	I0421 19:45:17.820303   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:45:17.820356   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.825093   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.829645   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.834166   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:45:17.834229   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:45:17.883866   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:17.883890   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:17.883894   55975 cri.go:89] found id: ""
	I0421 19:45:17.883900   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:45:17.883947   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.889166   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.893761   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:45:17.893828   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:45:17.933833   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:17.933857   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:17.933865   55975 cri.go:89] found id: ""
	I0421 19:45:17.933872   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:45:17.933924   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.939038   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.943780   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:45:17.943850   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:45:17.985754   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:17.985776   55975 cri.go:89] found id: ""
	I0421 19:45:17.985783   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:45:17.985824   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:17.990925   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:45:17.990981   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:45:18.042005   55975 cri.go:89] found id: ""
	I0421 19:45:18.042030   55975 logs.go:276] 0 containers: []
	W0421 19:45:18.042038   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:45:18.042049   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:45:18.042114   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:45:18.082732   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:18.082761   55975 cri.go:89] found id: ""
	I0421 19:45:18.082772   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:45:18.082827   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:18.087820   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:18.087846   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:18.128906   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:45:18.128936   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:45:18.508509   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:45:18.508550   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:18.552184   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:18.552219   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:18.603789   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:18.603817   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:18.653554   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:45:18.653581   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:18.704641   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:45:18.704667   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:18.784763   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:45:18.784798   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:18.822991   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:45:18.823021   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:18.875312   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:45:18.875345   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:18.922799   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:45:18.922834   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:20.962856   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.040003367s)
	W0421 19:45:20.962905   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:20.940290    6651 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:20Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:20.940290    6651 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:20Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:20.962915   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:45:20.962929   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:21.004024   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:21.004055   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:21.099893   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:21.099929   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:21.117000   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:45:21.117045   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:45:21.197191   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:45:21.197215   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:45:21.197228   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:21.241767   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:45:21.241801   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:20.514309   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:23.282735   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.040904965s)
	W0421 19:45:23.282780   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:23.260163    6681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:23Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:23.260163    6681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:23Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:25.783206   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:45:25.798486   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:45:25.798545   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:45:25.839755   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:25.839777   55975 cri.go:89] found id: ""
	I0421 19:45:25.839785   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:45:25.839829   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.844657   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:45:25.844716   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:45:25.884698   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:25.884722   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:25.884726   55975 cri.go:89] found id: ""
	I0421 19:45:25.884733   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:45:25.884796   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.889657   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.894049   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:45:25.894115   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:45:25.936277   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:25.936306   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:25.936310   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:25.936314   55975 cri.go:89] found id: ""
	I0421 19:45:25.936325   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:45:25.936388   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.941297   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.945923   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.950326   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:45:25.950394   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:45:25.991024   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:25.991051   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:25.991056   55975 cri.go:89] found id: ""
	I0421 19:45:25.991064   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:45:25.991122   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:25.995661   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:26.000474   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:45:26.000527   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:45:26.039075   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:26.039108   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:26.039115   55975 cri.go:89] found id: ""
	I0421 19:45:26.039124   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:45:26.039178   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:26.047451   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:26.051769   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:45:26.051822   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:45:26.089437   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:26.089460   55975 cri.go:89] found id: ""
	I0421 19:45:26.089467   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:45:26.089511   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:26.094073   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:45:26.094152   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:45:26.141387   55975 cri.go:89] found id: ""
	I0421 19:45:26.141419   55975 logs.go:276] 0 containers: []
	W0421 19:45:26.141429   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:45:26.141437   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:45:26.141506   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:45:26.182330   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:26.182356   55975 cri.go:89] found id: ""
	I0421 19:45:26.182364   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:45:26.182413   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:26.187240   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:45:26.187265   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:23.586313   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:28.223321   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.036034066s)
	W0421 19:45:28.223374   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:28.200663    6754 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:28Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:28.200663    6754 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:28Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:28.223383   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:45:28.223398   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:28.299622   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:45:28.299658   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:28.345556   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:45:28.345583   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:28.396381   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:45:28.396411   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:28.435839   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:45:28.435865   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:28.474915   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:45:28.474943   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:45:28.549910   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:45:28.549932   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:45:28.549946   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:28.602824   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:28.602855   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:28.644800   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:45:28.644830   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:28.702297   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:45:28.702333   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:30.742313   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.039957785s)
	W0421 19:45:30.742362   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:30.719251    6816 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:30Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:30.719251    6816 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:30Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:30.742371   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:45:30.742386   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:45:31.144221   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:31.144260   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:31.194520   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:31.194550   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:31.284034   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:31.284074   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:31.300809   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:31.300838   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:31.348390   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:45:31.348418   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:29.666323   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:32.738327   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
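	(Process 57617 keeps failing to reach 192.168.61.23:22 with "no route to host", which points at the guest VM being down or holding a different address rather than sshd refusing the connection. A diagnostic sketch from the host side, assuming the kvm2 driver and libvirt URI qemu:///system shown in the cluster config above; the domain name is not shown for this process in this excerpt, so <domain-name> is a placeholder:
	    ping -c 1 192.168.61.23
	    nc -vz -w 3 192.168.61.23 22
	    virsh --connect qemu:///system domifaddr <domain-name>
	If virsh reports a different lease, or none, the IP recorded in the machine config is stale and the dial will keep timing out.)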
	I0421 19:45:33.888351   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:45:33.903546   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:45:33.903616   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:45:33.947568   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:33.947591   55975 cri.go:89] found id: ""
	I0421 19:45:33.947599   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:45:33.947657   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:33.952054   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:45:33.952111   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:45:33.992183   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:33.992206   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:33.992210   55975 cri.go:89] found id: ""
	I0421 19:45:33.992217   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:45:33.992273   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:33.996848   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.001051   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:45:34.001102   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:45:34.039463   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:34.039488   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:34.039493   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:34.039498   55975 cri.go:89] found id: ""
	I0421 19:45:34.039507   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:45:34.039557   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.044118   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.048485   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.052762   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:45:34.052821   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:45:34.095887   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:34.095909   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:34.095914   55975 cri.go:89] found id: ""
	I0421 19:45:34.095923   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:45:34.095983   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.100455   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.104401   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:45:34.104447   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:45:34.145040   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:34.145059   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:34.145063   55975 cri.go:89] found id: ""
	I0421 19:45:34.145069   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:45:34.145113   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.149908   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.154338   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:45:34.154405   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:45:34.195358   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:34.195383   55975 cri.go:89] found id: ""
	I0421 19:45:34.195391   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:45:34.195441   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.200092   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:45:34.200151   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:45:34.240938   55975 cri.go:89] found id: ""
	I0421 19:45:34.240967   55975 logs.go:276] 0 containers: []
	W0421 19:45:34.240982   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:45:34.240990   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:45:34.241059   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:45:34.285027   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:34.285050   55975 cri.go:89] found id: ""
	I0421 19:45:34.285056   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:45:34.285098   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:34.290305   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:34.290327   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:34.388360   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:45:34.388401   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:36.424949   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.03653098s)
	W0421 19:45:36.424999   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:36.401612    6919 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:36Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:36.401612    6919 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:36Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:36.425010   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:45:36.425023   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:36.463796   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:45:36.463828   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:45:36.865125   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:36.865169   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:36.920623   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:45:36.920658   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:45:37.004602   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:45:37.004623   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:45:37.004639   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:37.058688   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:45:37.058718   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:37.134016   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:37.134049   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:37.183560   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:45:37.183601   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:37.243770   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:45:37.243800   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:39.287622   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.043802203s)
	W0421 19:45:39.287680   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:39.264005    6978 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:39Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:39.264005    6978 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:39Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:39.287688   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:45:39.287698   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:39.328258   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:39.328299   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:39.344638   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:45:39.344673   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:39.398626   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:45:39.398658   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:39.445470   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:39.445524   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:39.488825   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:45:39.488865   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:42.034308   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:45:42.053434   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:45:42.053489   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:45:42.093804   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:42.093837   55975 cri.go:89] found id: ""
	I0421 19:45:42.093848   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:45:42.093906   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.098672   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:45:42.098744   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:45:42.141441   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:42.141466   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:42.141471   55975 cri.go:89] found id: ""
	I0421 19:45:42.141480   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:45:42.141535   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.147288   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.151901   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:45:42.151960   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:45:42.193018   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:42.193040   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:42.193049   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:42.193052   55975 cri.go:89] found id: ""
	I0421 19:45:42.193058   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:45:42.193104   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.198342   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.203320   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.207569   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:45:42.207630   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:45:42.248209   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:42.248230   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:42.248233   55975 cri.go:89] found id: ""
	I0421 19:45:42.248240   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:45:42.248292   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.253052   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.257749   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:45:42.257797   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:45:42.299264   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:42.299286   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:42.299290   55975 cri.go:89] found id: ""
	I0421 19:45:42.299296   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:45:42.299344   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.303980   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.308580   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:45:42.308672   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:45:42.348490   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:42.348514   55975 cri.go:89] found id: ""
	I0421 19:45:42.348527   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:45:42.348601   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.353125   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:45:42.353202   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:45:42.392508   55975 cri.go:89] found id: ""
	I0421 19:45:42.392539   55975 logs.go:276] 0 containers: []
	W0421 19:45:42.392548   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:45:42.392554   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:45:42.392628   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:45:42.438969   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:42.439001   55975 cri.go:89] found id: ""
	I0421 19:45:42.439010   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:45:42.439062   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:42.443582   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:45:42.443604   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:38.818323   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:41.890279   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:44.489041   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.045419462s)
	W0421 19:45:44.489079   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:44.465120    7086 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:44Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:44.465120    7086 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:44Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:44.489086   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:45:44.489097   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:44.534280   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:44.534310   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:44.580762   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:45:44.580792   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:44.625409   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:45:44.625442   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:44.678700   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:45:44.678734   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:44.748751   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:45:44.748793   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:44.787125   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:45:44.787159   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:45:44.857476   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:45:44.857499   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:45:44.857512   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:44.902241   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:45:44.902272   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:46.944739   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.04244459s)
	W0421 19:45:46.944795   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:46.920902    7143 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:46Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:46.920902    7143 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:46Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:46.944806   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:45:46.944821   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:46.985677   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:45:46.985703   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:45:47.360819   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:47.360857   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:47.413363   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:47.413389   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:47.500682   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:47.500718   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:47.517421   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:47.517448   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:47.560115   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:45:47.560150   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:47.970333   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:50.120329   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:45:50.134949   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:45:50.135019   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:45:50.183081   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:50.183100   55975 cri.go:89] found id: ""
	I0421 19:45:50.183108   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:45:50.183166   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.187879   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:45:50.187935   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:45:50.223002   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:50.223028   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:50.223032   55975 cri.go:89] found id: ""
	I0421 19:45:50.223040   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:45:50.223085   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.227329   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.231384   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:45:50.231428   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:45:50.273361   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:50.273379   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:50.273383   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:50.273386   55975 cri.go:89] found id: ""
	I0421 19:45:50.273392   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:45:50.273440   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.277826   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.282456   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.286808   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:45:50.286856   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:45:50.322902   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:50.322925   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:50.322929   55975 cri.go:89] found id: ""
	I0421 19:45:50.322935   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:45:50.322983   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.327300   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.331314   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:45:50.331371   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:45:50.369763   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:50.369790   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:50.369796   55975 cri.go:89] found id: ""
	I0421 19:45:50.369804   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:45:50.369862   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.374354   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.378614   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:45:50.378699   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:45:50.422163   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:50.422187   55975 cri.go:89] found id: ""
	I0421 19:45:50.422195   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:45:50.422239   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.426763   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:45:50.426817   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:45:50.463936   55975 cri.go:89] found id: ""
	I0421 19:45:50.463972   55975 logs.go:276] 0 containers: []
	W0421 19:45:50.463981   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:45:50.463991   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:45:50.464042   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:45:50.517363   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:50.517388   55975 cri.go:89] found id: ""
	I0421 19:45:50.517396   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:45:50.517449   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:50.521928   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:45:50.521954   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:50.572652   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:45:50.572682   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:50.612310   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:45:50.612355   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:45:51.026914   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:51.026953   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:51.071882   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:45:51.071913   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:51.110904   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:45:51.110934   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:51.166632   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:51.166668   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:51.211312   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:45:51.211342   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:51.290162   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:45:51.290198   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:51.042369   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:53.331614   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.041393206s)
	W0421 19:45:53.331682   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:53.307181    7296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:53Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:53.307181    7296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:45:53Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:53.331707   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:45:53.331720   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:53.371351   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:53.371380   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:53.422740   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:53.422767   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:53.518821   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:45:53.518861   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:53.569342   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:45:53.569376   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:53.617140   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:53.617168   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:53.633002   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:45:53.633031   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:45:53.704821   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:45:53.704847   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:45:53.704866   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:55.744962   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.040074989s)
	W0421 19:45:55.745012   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:45:55.720551    7343 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:55Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:45:55.720551    7343 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:45:55Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:45:57.122313   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:45:58.245863   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:45:58.261583   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:45:58.261652   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:45:58.301867   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:45:58.301894   55975 cri.go:89] found id: ""
	I0421 19:45:58.301903   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:45:58.301957   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.306733   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:45:58.306783   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:45:58.343879   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:58.343907   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:45:58.343914   55975 cri.go:89] found id: ""
	I0421 19:45:58.343923   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:45:58.343966   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.348291   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.352622   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:45:58.352669   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:45:58.396077   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:45:58.396097   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:58.396101   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:45:58.396104   55975 cri.go:89] found id: ""
	I0421 19:45:58.396117   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:45:58.396162   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.400436   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.404536   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.408657   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:45:58.408715   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:45:58.447850   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:45:58.447877   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:45:58.447883   55975 cri.go:89] found id: ""
	I0421 19:45:58.447893   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:45:58.447953   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.452564   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.457565   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:45:58.457637   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:45:58.502253   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:45:58.502288   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:45:58.502299   55975 cri.go:89] found id: ""
	I0421 19:45:58.502307   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:45:58.502363   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.506716   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.510793   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:45:58.510842   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:45:58.548462   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:45:58.548483   55975 cri.go:89] found id: ""
	I0421 19:45:58.548490   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:45:58.548535   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.552846   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:45:58.552909   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:45:58.588236   55975 cri.go:89] found id: ""
	I0421 19:45:58.588263   55975 logs.go:276] 0 containers: []
	W0421 19:45:58.588271   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:45:58.588277   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:45:58.588321   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:45:58.626941   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:45:58.626970   55975 cri.go:89] found id: ""
	I0421 19:45:58.626980   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:45:58.627036   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:45:58.631477   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:45:58.631504   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:45:58.680752   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:45:58.680780   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:45:58.775695   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:45:58.775733   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:45:58.792480   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:45:58.792509   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:45:58.839880   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:45:58.839914   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:45:58.878230   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:45:58.878265   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:00.922739   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.04444951s)
	W0421 19:46:00.922775   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:00.898076    7441 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:00Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:00.898076    7441 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:00Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:00.922783   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:00.922794   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:00.996449   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:00.996490   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:01.417255   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:01.417292   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:00.194377   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:03.458609   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.041294049s)
	W0421 19:46:03.458652   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:03.433761    7459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:03Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:03.433761    7459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:03Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:03.458661   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:03.458670   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:03.532929   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:03.532954   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:03.532970   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:03.577037   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:03.577066   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:03.626039   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:03.626079   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:03.664967   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:03.664996   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:03.713533   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:03.713565   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:03.764114   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:46:03.764159   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:03.802181   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:03.802210   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:06.340928   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:06.356550   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:46:06.356622   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:46:06.400266   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:06.400296   55975 cri.go:89] found id: ""
	I0421 19:46:06.400305   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:46:06.400362   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.404970   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:46:06.405041   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:46:06.443823   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:06.443853   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:06.443860   55975 cri.go:89] found id: ""
	I0421 19:46:06.443872   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:46:06.443930   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.448567   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.452891   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:46:06.452943   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:46:06.491318   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:06.491344   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:06.491350   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:06.491354   55975 cri.go:89] found id: ""
	I0421 19:46:06.491363   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:46:06.491415   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.496054   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.500315   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.504575   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:46:06.504630   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:46:06.549283   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:06.549311   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:06.549316   55975 cri.go:89] found id: ""
	I0421 19:46:06.549323   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:46:06.549379   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.554733   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.559219   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:46:06.559268   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:46:06.598581   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:06.598601   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:06.598610   55975 cri.go:89] found id: ""
	I0421 19:46:06.598617   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:46:06.598683   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.603190   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.607926   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:46:06.607981   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:46:06.647399   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:06.647427   55975 cri.go:89] found id: ""
	I0421 19:46:06.647435   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:46:06.647481   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.651951   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:46:06.652010   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:46:06.691542   55975 cri.go:89] found id: ""
	I0421 19:46:06.691569   55975 logs.go:276] 0 containers: []
	W0421 19:46:06.691576   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:46:06.691581   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:46:06.691631   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:46:06.732532   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:06.732553   55975 cri.go:89] found id: ""
	I0421 19:46:06.732560   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:46:06.732601   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:06.737065   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:06.737087   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:06.785455   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:46:06.785481   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:06.831172   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:06.831205   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:06.869500   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:06.869537   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:07.276617   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:46:07.276654   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:46:07.292935   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:07.292962   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:07.365263   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:07.365291   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:46:07.365310   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:07.407817   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:46:07.407846   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:46:07.503989   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:46:07.504023   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:06.274288   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:09.543492   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.039447612s)
	W0421 19:46:09.543537   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:09.518112    7619 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:09Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:09.518112    7619 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:09Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:09.543548   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:09.543560   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:09.631273   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:09.631309   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:09.676456   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:46:09.676495   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:09.717027   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:46:09.717054   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:46:09.765679   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:09.765712   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:09.817011   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:09.817040   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:09.872233   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:09.872269   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:11.912604   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.040313133s)
	W0421 19:46:11.912643   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:11.887423    7669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:11Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:11.887423    7669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:11Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:11.912651   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:11.912664   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:09.346357   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:14.455542   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:14.472154   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:46:14.472233   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:46:14.510933   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:14.510960   55975 cri.go:89] found id: ""
	I0421 19:46:14.510970   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:46:14.511031   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.515682   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:46:14.515740   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:46:14.567000   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:14.567030   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:14.567036   55975 cri.go:89] found id: ""
	I0421 19:46:14.567046   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:46:14.567099   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.571567   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.576086   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:46:14.576135   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:46:14.615131   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:14.615150   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:14.615153   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:14.615156   55975 cri.go:89] found id: ""
	I0421 19:46:14.615163   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:46:14.615209   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.620111   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.624519   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.628754   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:46:14.628802   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:46:14.669253   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:14.669283   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:14.669288   55975 cri.go:89] found id: ""
	I0421 19:46:14.669295   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:46:14.669337   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.674116   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.678529   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:46:14.678583   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:46:14.717475   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:14.717500   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:14.717510   55975 cri.go:89] found id: ""
	I0421 19:46:14.717518   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:46:14.717567   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.722394   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.726890   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:46:14.726950   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:46:14.768988   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:14.769006   55975 cri.go:89] found id: ""
	I0421 19:46:14.769015   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:46:14.769065   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.773776   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:46:14.773840   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:46:14.813163   55975 cri.go:89] found id: ""
	I0421 19:46:14.813192   55975 logs.go:276] 0 containers: []
	W0421 19:46:14.813202   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:46:14.813208   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:46:14.813265   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:46:14.851492   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:14.851525   55975 cri.go:89] found id: ""
	I0421 19:46:14.851531   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:46:14.851590   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:14.857441   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:46:14.857471   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:14.898207   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:46:14.898248   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:14.943862   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:14.943899   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:14.991199   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:14.991231   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:15.037089   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:15.037122   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:15.082141   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:46:15.082173   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:17.119082   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.036889598s)
	W0421 19:46:17.119128   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:17.093527    7785 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:17Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:17.093527    7785 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:17Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:17.119138   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:17.119151   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:17.158970   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:17.159000   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:17.199431   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:17.199457   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:17.586558   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:46:17.586593   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:46:15.426262   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:17.682082   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:17.682120   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:17.753445   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:17.753469   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:46:17.753482   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:46:17.801952   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:17.801987   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:17.860472   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:46:17.860505   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:46:17.876038   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:46:17.876065   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:17.917778   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:17.917807   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:17.999686   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:17.999718   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:20.046449   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.046713377s)
	W0421 19:46:20.046486   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:20.020542    7844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:20Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:20.020542    7844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:20Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:22.547230   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:22.564061   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:46:22.564124   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:46:22.607278   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:22.607346   55975 cri.go:89] found id: ""
	I0421 19:46:22.607367   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:46:22.607417   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.612512   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:46:22.612570   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:46:18.498306   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:22.653313   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:22.653337   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:22.653344   55975 cri.go:89] found id: ""
	I0421 19:46:22.653352   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:46:22.653416   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.658601   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.663157   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:46:22.663216   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:46:22.702566   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:22.702599   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:22.702605   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:22.702609   55975 cri.go:89] found id: ""
	I0421 19:46:22.702619   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:46:22.702677   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.707491   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.711976   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.716552   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:46:22.716605   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:46:22.757253   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:22.757282   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:22.757287   55975 cri.go:89] found id: ""
	I0421 19:46:22.757297   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:46:22.757352   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.762980   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.767785   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:46:22.767861   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:46:22.810829   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:22.810851   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:22.810854   55975 cri.go:89] found id: ""
	I0421 19:46:22.810860   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:46:22.810915   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.815636   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.820048   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:46:22.820106   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:46:22.859626   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:22.859664   55975 cri.go:89] found id: ""
	I0421 19:46:22.859675   55975 logs.go:276] 1 containers: [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:46:22.859729   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.864607   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:46:22.864670   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:46:22.905179   55975 cri.go:89] found id: ""
	I0421 19:46:22.905212   55975 logs.go:276] 0 containers: []
	W0421 19:46:22.905226   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:46:22.905235   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:46:22.905293   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:46:22.948878   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:22.948906   55975 cri.go:89] found id: ""
	I0421 19:46:22.948923   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:46:22.948974   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:22.953762   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:46:22.953781   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:46:23.050997   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:23.051043   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:23.100469   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:23.100503   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:23.156833   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:46:23.156870   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:25.198786   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.041897862s)
	W0421 19:46:25.198828   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:25.172321    7935 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:25Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:25.172321    7935 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:25Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:25.198836   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:25.198849   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:27.244570   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.045701171s)
	W0421 19:46:27.244621   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:27.218182    7946 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:27Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:27.218182    7946 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:27Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:27.244632   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:46:27.244647   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:27.299283   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:27.299310   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:24.578334   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:27.650309   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:27.691796   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:27.691838   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:27.767889   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:27.767913   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:27.767928   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:27.819910   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:27.819945   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:27.864547   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:46:27.864580   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:46:27.908354   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:46:27.908384   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:46:27.923199   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:27.923230   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:27.969164   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:27.969195   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:28.009218   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:46:28.009267   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:28.063734   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:46:28.063767   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:28.105682   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:28.105711   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:30.687752   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:30.701955   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:46:30.702014   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:46:30.743001   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:30.743027   55975 cri.go:89] found id: ""
	I0421 19:46:30.743037   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:46:30.743087   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.747473   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:46:30.747537   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:46:30.786736   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:30.786760   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:30.786764   55975 cri.go:89] found id: ""
	I0421 19:46:30.786771   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:46:30.786827   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.791547   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.796132   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:46:30.796182   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:46:30.835998   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:30.836026   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:30.836032   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:30.836037   55975 cri.go:89] found id: ""
	I0421 19:46:30.836047   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:46:30.836107   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.840841   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.845285   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.849507   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:46:30.849564   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:46:30.889705   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:30.889728   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:30.889731   55975 cri.go:89] found id: ""
	I0421 19:46:30.889738   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:46:30.889787   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.894441   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.898555   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:46:30.898605   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:46:30.939167   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:30.939206   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:30.939211   55975 cri.go:89] found id: ""
	I0421 19:46:30.939220   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:46:30.939289   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.943928   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.948177   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:46:30.948240   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:46:30.994160   55975 cri.go:89] found id: "914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4"
	I0421 19:46:30.994188   55975 cri.go:89] found id: "d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:30.994193   55975 cri.go:89] found id: ""
	I0421 19:46:30.994201   55975 logs.go:276] 2 containers: [914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b]
	I0421 19:46:30.994252   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:30.998705   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:31.003153   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:46:31.003204   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:46:31.042318   55975 cri.go:89] found id: ""
	I0421 19:46:31.042343   55975 logs.go:276] 0 containers: []
	W0421 19:46:31.042354   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:46:31.042362   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:46:31.042416   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:46:31.096817   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:31.096845   55975 cri.go:89] found id: ""
	I0421 19:46:31.096854   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:46:31.096911   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:31.101474   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:46:31.101503   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:46:31.201422   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:46:31.201466   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:31.247172   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:31.247201   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:31.295350   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:31.295379   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:31.333146   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:31.333178   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:31.388034   55975 logs.go:123] Gathering logs for kube-controller-manager [914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4] ...
	I0421 19:46:31.388064   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4"
	I0421 19:46:31.428992   55975 logs.go:123] Gathering logs for kube-controller-manager [d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b] ...
	I0421 19:46:31.429021   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d078889cc9618433fe65902f3bcfc392a2f9a42e7813bf1f904a07b48c9add8b"
	I0421 19:46:31.467235   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:31.467262   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:31.826805   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:46:31.826841   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:46:31.847229   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:31.847261   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:31.926453   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:31.926479   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:31.926498   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:31.972380   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:46:31.972410   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:32.016683   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:46:32.016711   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:34.058877   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.042139497s)
	W0421 19:46:34.058936   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:34.032398    8191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:34Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:34.032398    8191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:34Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:34.058949   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:34.058963   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:34.098431   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:34.098461   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:34.143399   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:34.143427   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:34.220822   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:34.220865   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:36.262707   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.041818616s)
	W0421 19:46:36.262759   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:36.235624    8223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:36Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:36.235624    8223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:36Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:36.262770   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:46:36.262784   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:46:33.730286   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:36.802314   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:38.809527   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:38.825017   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:46:38.825094   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:46:38.866774   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:38.866805   55975 cri.go:89] found id: ""
	I0421 19:46:38.866816   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:46:38.866877   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:38.871712   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:46:38.871784   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:46:38.912558   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:38.912585   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:38.912590   55975 cri.go:89] found id: ""
	I0421 19:46:38.912598   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:46:38.912654   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:38.917653   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:38.922046   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:46:38.922131   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:46:38.958876   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:38.958907   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:38.958913   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:38.958918   55975 cri.go:89] found id: ""
	I0421 19:46:38.958927   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:46:38.958989   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:38.963904   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:38.967972   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:38.972366   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:46:38.972425   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:46:39.012829   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:39.012850   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:39.012855   55975 cri.go:89] found id: ""
	I0421 19:46:39.012861   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:46:39.012916   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:39.017731   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:39.022024   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:46:39.022103   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:46:39.065177   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:39.065199   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:39.065203   55975 cri.go:89] found id: ""
	I0421 19:46:39.065209   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:46:39.065258   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:39.069793   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:39.074021   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:46:39.074108   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:46:39.117388   55975 cri.go:89] found id: "914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4"
	I0421 19:46:39.117412   55975 cri.go:89] found id: ""
	I0421 19:46:39.117421   55975 logs.go:276] 1 containers: [914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4]
	I0421 19:46:39.117471   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:39.122053   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:46:39.122115   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:46:39.163926   55975 cri.go:89] found id: ""
	I0421 19:46:39.163953   55975 logs.go:276] 0 containers: []
	W0421 19:46:39.163961   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:46:39.163967   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:46:39.164024   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:46:39.201520   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:39.201543   55975 cri.go:89] found id: ""
	I0421 19:46:39.201550   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:46:39.201596   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:39.206151   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:39.206180   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:41.244403   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.038197269s)
	W0421 19:46:41.244449   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:41.217460    8326 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:41Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:41.217460    8326 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:41Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:41.244461   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:46:41.244474   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:41.290580   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:41.290613   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:41.336884   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:41.336913   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:41.380142   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:41.380169   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:41.427930   55975 logs.go:123] Gathering logs for kube-controller-manager [914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4] ...
	I0421 19:46:41.427967   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4"
	I0421 19:46:41.467597   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:41.467631   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:41.835878   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:46:41.835912   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:46:41.888911   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:46:41.888939   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:46:41.905822   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:41.905847   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:41.976662   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:41.976690   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:41.976709   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:42.022890   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:46:42.022921   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:46:42.117377   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:42.117413   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:42.155927   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:42.155951   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:42.203545   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:46:42.203579   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:42.243692   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:46:42.243729   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:42.882281   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:44.282304   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.038553514s)
	W0421 19:46:44.282358   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:44.255195    8400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:44Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:44.255195    8400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:44Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:44.282366   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:44.282377   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:46.861889   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:46.879130   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:46:46.879200   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:46:46.918958   55975 cri.go:89] found id: "4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:46.918988   55975 cri.go:89] found id: ""
	I0421 19:46:46.918998   55975 logs.go:276] 1 containers: [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9]
	I0421 19:46:46.919047   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:46.923728   55975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:46:46.923783   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:46:46.961669   55975 cri.go:89] found id: "adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:46.961696   55975 cri.go:89] found id: "5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:46.961700   55975 cri.go:89] found id: ""
	I0421 19:46:46.961706   55975 logs.go:276] 2 containers: [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca]
	I0421 19:46:46.961760   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:46.966890   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:46.971312   55975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:46:46.971366   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:46:47.010606   55975 cri.go:89] found id: "72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:47.010631   55975 cri.go:89] found id: "4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:47.010635   55975 cri.go:89] found id: "c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:47.010638   55975 cri.go:89] found id: ""
	I0421 19:46:47.010650   55975 logs.go:276] 3 containers: [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]
	I0421 19:46:47.010693   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.015453   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.020076   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.024259   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:46:47.024317   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:46:47.063617   55975 cri.go:89] found id: "288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:47.063641   55975 cri.go:89] found id: "10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:47.063646   55975 cri.go:89] found id: ""
	I0421 19:46:47.063654   55975 logs.go:276] 2 containers: [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5]
	I0421 19:46:47.063712   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.068426   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.073100   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:46:47.073157   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:46:47.111907   55975 cri.go:89] found id: "0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:47.111927   55975 cri.go:89] found id: "5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:47.111930   55975 cri.go:89] found id: ""
	I0421 19:46:47.111936   55975 logs.go:276] 2 containers: [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]
	I0421 19:46:47.111991   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.116608   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.121208   55975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:46:47.121262   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:46:47.162578   55975 cri.go:89] found id: "914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4"
	I0421 19:46:47.162602   55975 cri.go:89] found id: ""
	I0421 19:46:47.162611   55975 logs.go:276] 1 containers: [914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4]
	I0421 19:46:47.162668   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.167497   55975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:46:47.167564   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:46:47.208395   55975 cri.go:89] found id: ""
	I0421 19:46:47.208421   55975 logs.go:276] 0 containers: []
	W0421 19:46:47.208430   55975 logs.go:278] No container was found matching "kindnet"
	I0421 19:46:47.208435   55975 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0421 19:46:47.208484   55975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0421 19:46:47.252017   55975 cri.go:89] found id: "af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:47.252041   55975 cri.go:89] found id: ""
	I0421 19:46:47.252051   55975 logs.go:276] 1 containers: [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92]
	I0421 19:46:47.252107   55975 ssh_runner.go:195] Run: which crictl
	I0421 19:46:47.256855   55975 logs.go:123] Gathering logs for dmesg ...
	I0421 19:46:47.256878   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:46:47.272985   55975 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:46:47.273020   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:46:47.346173   55975 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:46:47.346202   55975 logs.go:123] Gathering logs for kube-apiserver [4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9] ...
	I0421 19:46:47.346215   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e65619e37e5c0f26a245b55f3dd12334b7a9f554944679cb6daded3fd2cbba9"
	I0421 19:46:47.408397   55975 logs.go:123] Gathering logs for etcd [5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca] ...
	I0421 19:46:47.408424   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dded63496e6552be54578f0167df04da094f2db28259ace1a7484658c1db0ca"
	I0421 19:46:47.449333   55975 logs.go:123] Gathering logs for coredns [4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23] ...
	I0421 19:46:47.449364   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4112c4866cd6d36e1861c1b64374c71c1e84f56b748afd3d255b6cf468431a23"
	I0421 19:46:47.492658   55975 logs.go:123] Gathering logs for kube-scheduler [10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5] ...
	I0421 19:46:47.492687   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10aa12077bd99bb61f084ee00cc2d3d95749698058874cc76082fdb727c24ff5"
	I0421 19:46:47.532435   55975 logs.go:123] Gathering logs for kube-proxy [0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29] ...
	I0421 19:46:47.532467   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c60ee20a985c6fc1872827b16dc5bc6bd7858947c4e3b9966cbfc71c1e68e29"
	I0421 19:46:47.581864   55975 logs.go:123] Gathering logs for coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2] ...
	I0421 19:46:47.581894   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	I0421 19:46:45.954362   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:49.621800   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": (2.039884421s)
	W0421 19:46:49.621845   55975 logs.go:130] failed coredns [c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:49.594217    8517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:49Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:49.594217    8517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c0345f0d0889626c017756c85bb6f883613e015ac89930ce3f1e3af36a1e39f2"
	time="2024-04-21T19:46:49Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:49.621857   55975 logs.go:123] Gathering logs for kube-controller-manager [914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4] ...
	I0421 19:46:49.621869   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 914d8cfae8ecffd897049632577a7d3d7ff6207910cc119292546fe510edd4d4"
	I0421 19:46:49.661270   55975 logs.go:123] Gathering logs for coredns [72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e] ...
	I0421 19:46:49.661299   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72564b8282db013714e5d94bb4a6840725b85be6060d5fa95d702fee8b055d9e"
	I0421 19:46:49.707908   55975 logs.go:123] Gathering logs for kube-scheduler [288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03] ...
	I0421 19:46:49.707936   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288160b130ab6578c5d7567071b9407602dba6de8f66d44eaf1ba53d96d32e03"
	I0421 19:46:49.787989   55975 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:46:49.788028   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:46:50.152744   55975 logs.go:123] Gathering logs for container status ...
	I0421 19:46:50.152786   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:46:50.202354   55975 logs.go:123] Gathering logs for kubelet ...
	I0421 19:46:50.202386   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:46:50.302212   55975 logs.go:123] Gathering logs for etcd [adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517] ...
	I0421 19:46:50.302249   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adb9dc822d4da87672463516d0645abe5353f2fd5c9edba4fbd803ebe015c517"
	I0421 19:46:50.352448   55975 logs.go:123] Gathering logs for kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2] ...
	I0421 19:46:50.352486   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	I0421 19:46:52.396038   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": (2.04353063s)
	W0421 19:46:52.396087   55975 logs.go:130] failed kube-proxy [5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2": Process exited with status 1
	stdout:
	
	stderr:
	E0421 19:46:52.368328    8565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:52Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	 output: 
	** stderr ** 
	E0421 19:46:52.368328    8565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5f2bbc2d689c7473fec8844f6e701ecf858514a29d09f208bb95409acd54eda2"
	time="2024-04-21T19:46:52Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	** /stderr **
	I0421 19:46:52.396098   55975 logs.go:123] Gathering logs for storage-provisioner [af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92] ...
	I0421 19:46:52.396112   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af3c7f9e4ce445c5435e35e50f0eaed3724772021002f45422bfb46661840c92"
	I0421 19:46:52.034385   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:46:54.938246   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:46:54.955198   55975 kubeadm.go:591] duration metric: took 4m29.911760589s to restartPrimaryControlPlane
	W0421 19:46:54.955286   55975 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0421 19:46:54.955323   55975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:46:55.106283   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:47:01.186343   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:47:04.258371   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:47:10.645360   55975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.690011153s)
	I0421 19:47:10.645450   55975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:47:10.663859   55975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:47:10.676072   55975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:47:10.687900   55975 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:47:10.687924   55975 kubeadm.go:156] found existing configuration files:
	
	I0421 19:47:10.687979   55975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:47:10.699240   55975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:47:10.699307   55975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:47:10.710625   55975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:47:10.721667   55975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:47:10.721753   55975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:47:10.733419   55975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:47:10.744819   55975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:47:10.744892   55975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:47:10.756682   55975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:47:10.767481   55975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:47:10.767531   55975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:47:10.778496   55975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:47:10.837402   55975 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:47:10.837485   55975 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:47:10.994243   55975 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:47:10.994367   55975 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:47:10.994551   55975 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:47:11.232918   55975 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:47:11.235113   55975 out.go:204]   - Generating certificates and keys ...
	I0421 19:47:11.235219   55975 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:47:11.235303   55975 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:47:11.235403   55975 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:47:11.235490   55975 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:47:11.235595   55975 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:47:11.235668   55975 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:47:11.235747   55975 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:47:11.235869   55975 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:47:11.236302   55975 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:47:11.236975   55975 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:47:11.237362   55975 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:47:11.237437   55975 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:47:11.290552   55975 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:47:11.408081   55975 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:47:11.519040   55975 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:47:11.615329   55975 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:47:11.676085   55975 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:47:11.676478   55975 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:47:11.679162   55975 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:47:11.681580   55975 out.go:204]   - Booting up control plane ...
	I0421 19:47:11.681797   55975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:47:11.681919   55975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:47:11.682019   55975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:47:11.701244   55975 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:47:11.702618   55975 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:47:11.702661   55975 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:47:11.843180   55975 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:47:11.843299   55975 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:47:12.344084   55975 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.218809ms
	I0421 19:47:12.344166   55975 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:47:10.338282   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:47:17.349602   55975 kubeadm.go:309] [api-check] The API server is healthy after 5.002719134s
	I0421 19:47:17.364853   55975 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:47:17.383825   55975 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:47:17.417358   55975 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:47:17.417626   55975 kubeadm.go:309] [mark-control-plane] Marking the node kubernetes-upgrade-595552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:47:17.437729   55975 kubeadm.go:309] [bootstrap-token] Using token: 5syp4n.pwgdazymox5wx71q
	I0421 19:47:17.439254   55975 out.go:204]   - Configuring RBAC rules ...
	I0421 19:47:17.439389   55975 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:47:17.445849   55975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:47:17.455882   55975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:47:17.459647   55975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:47:17.463528   55975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:47:17.467564   55975 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:47:13.410284   57617 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.23:22: connect: no route to host
	I0421 19:47:17.755499   55975 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:47:18.200287   55975 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:47:18.760126   55975 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:47:18.761396   55975 kubeadm.go:309] 
	I0421 19:47:18.761499   55975 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:47:18.761530   55975 kubeadm.go:309] 
	I0421 19:47:18.761625   55975 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:47:18.761636   55975 kubeadm.go:309] 
	I0421 19:47:18.761666   55975 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:47:18.761739   55975 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:47:18.761831   55975 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:47:18.761851   55975 kubeadm.go:309] 
	I0421 19:47:18.762002   55975 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:47:18.762023   55975 kubeadm.go:309] 
	I0421 19:47:18.762101   55975 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:47:18.762112   55975 kubeadm.go:309] 
	I0421 19:47:18.762159   55975 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:47:18.762230   55975 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:47:18.762328   55975 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:47:18.762337   55975 kubeadm.go:309] 
	I0421 19:47:18.762463   55975 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:47:18.762555   55975 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:47:18.762564   55975 kubeadm.go:309] 
	I0421 19:47:18.762678   55975 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5syp4n.pwgdazymox5wx71q \
	I0421 19:47:18.762825   55975 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 19:47:18.762865   55975 kubeadm.go:309] 	--control-plane 
	I0421 19:47:18.762880   55975 kubeadm.go:309] 
	I0421 19:47:18.762987   55975 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:47:18.763001   55975 kubeadm.go:309] 
	I0421 19:47:18.763119   55975 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5syp4n.pwgdazymox5wx71q \
	I0421 19:47:18.763276   55975 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 19:47:18.763708   55975 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:47:18.763806   55975 cni.go:84] Creating CNI manager for ""
	I0421 19:47:18.763817   55975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:47:18.765712   55975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:47:18.767074   55975 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:47:18.783236   55975 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 19:47:18.809278   55975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:47:18.809352   55975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:47:18.809401   55975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-595552 minikube.k8s.io/updated_at=2024_04_21T19_47_18_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=kubernetes-upgrade-595552 minikube.k8s.io/primary=true
	I0421 19:47:19.023419   55975 ops.go:34] apiserver oom_adj: -16
	I0421 19:47:19.023487   55975 kubeadm.go:1107] duration metric: took 214.201922ms to wait for elevateKubeSystemPrivileges
	W0421 19:47:19.023524   55975 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:47:19.023534   55975 kubeadm.go:393] duration metric: took 4m54.310537524s to StartCluster
	I0421 19:47:19.023558   55975 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:47:19.023653   55975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:47:19.025231   55975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:47:19.025490   55975 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.31 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:47:19.027180   55975 out.go:177] * Verifying Kubernetes components...
	I0421 19:47:19.025549   55975 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:47:19.025733   55975 config.go:182] Loaded profile config "kubernetes-upgrade-595552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:47:19.028532   55975 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-595552"
	I0421 19:47:19.028549   55975 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-595552"
	I0421 19:47:19.028576   55975 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-595552"
	W0421 19:47:19.028586   55975 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:47:19.028588   55975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-595552"
	I0421 19:47:19.028611   55975 host.go:66] Checking if "kubernetes-upgrade-595552" exists ...
	I0421 19:47:19.028538   55975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:47:19.028908   55975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:47:19.028935   55975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:47:19.028936   55975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:47:19.028984   55975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:47:19.044500   55975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0421 19:47:19.044980   55975 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:47:19.045522   55975 main.go:141] libmachine: Using API Version  1
	I0421 19:47:19.045551   55975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:47:19.045899   55975 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:47:19.046127   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetState
	I0421 19:47:19.046532   55975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0421 19:47:19.046899   55975 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:47:19.047413   55975 main.go:141] libmachine: Using API Version  1
	I0421 19:47:19.047436   55975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:47:19.047771   55975 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:47:19.048254   55975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:47:19.048280   55975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:47:19.049100   55975 kapi.go:59] client config for kubernetes-upgrade-595552: &rest.Config{Host:"https://192.168.72.31:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kubernetes-upgrade-595552/client.key", CAFile:"/home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 19:47:19.049458   55975 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-595552"
	W0421 19:47:19.049481   55975 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:47:19.049510   55975 host.go:66] Checking if "kubernetes-upgrade-595552" exists ...
	I0421 19:47:19.049873   55975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:47:19.049906   55975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:47:19.063031   55975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I0421 19:47:19.063566   55975 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:47:19.064122   55975 main.go:141] libmachine: Using API Version  1
	I0421 19:47:19.064142   55975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:47:19.064394   55975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0421 19:47:19.064470   55975 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:47:19.064657   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetState
	I0421 19:47:19.064732   55975 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:47:19.065200   55975 main.go:141] libmachine: Using API Version  1
	I0421 19:47:19.065235   55975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:47:19.065534   55975 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:47:19.065963   55975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:47:19.065999   55975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:47:19.066451   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:47:19.068526   55975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:47:19.070052   55975 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:47:19.070085   55975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:47:19.070103   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:47:19.073267   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:47:19.073845   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:47:19.073875   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:47:19.074086   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:47:19.074257   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:47:19.074412   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:47:19.074545   55975 sshutil.go:53] new ssh client: &{IP:192.168.72.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa Username:docker}
	I0421 19:47:19.081130   55975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0421 19:47:19.081474   55975 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:47:19.081902   55975 main.go:141] libmachine: Using API Version  1
	I0421 19:47:19.081916   55975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:47:19.082195   55975 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:47:19.082360   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetState
	I0421 19:47:19.083874   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .DriverName
	I0421 19:47:19.084111   55975 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:47:19.084130   55975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:47:19.084143   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHHostname
	I0421 19:47:19.086875   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:47:19.087302   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bd:15", ip: ""} in network mk-kubernetes-upgrade-595552: {Iface:virbr4 ExpiryTime:2024-04-21 20:34:30 +0000 UTC Type:0 Mac:52:54:00:8b:bd:15 Iaid: IPaddr:192.168.72.31 Prefix:24 Hostname:kubernetes-upgrade-595552 Clientid:01:52:54:00:8b:bd:15}
	I0421 19:47:19.087318   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | domain kubernetes-upgrade-595552 has defined IP address 192.168.72.31 and MAC address 52:54:00:8b:bd:15 in network mk-kubernetes-upgrade-595552
	I0421 19:47:19.087463   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHPort
	I0421 19:47:19.087622   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHKeyPath
	I0421 19:47:19.087752   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .GetSSHUsername
	I0421 19:47:19.087880   55975 sshutil.go:53] new ssh client: &{IP:192.168.72.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kubernetes-upgrade-595552/id_rsa Username:docker}
	I0421 19:47:19.231926   55975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:47:19.262742   55975 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:47:19.262835   55975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:47:19.284957   55975 api_server.go:72] duration metric: took 259.431046ms to wait for apiserver process to appear ...
	I0421 19:47:19.284986   55975 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:47:19.285006   55975 api_server.go:253] Checking apiserver healthz at https://192.168.72.31:8443/healthz ...
	I0421 19:47:19.292659   55975 api_server.go:279] https://192.168.72.31:8443/healthz returned 200:
	ok
	I0421 19:47:19.302550   55975 api_server.go:141] control plane version: v1.30.0
	I0421 19:47:19.302578   55975 api_server.go:131] duration metric: took 17.585097ms to wait for apiserver health ...
	I0421 19:47:19.302588   55975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:47:19.309804   55975 system_pods.go:59] 4 kube-system pods found
	I0421 19:47:19.309833   55975 system_pods.go:61] "etcd-kubernetes-upgrade-595552" [266626a1-d745-4423-8912-9f7cb779669d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 19:47:19.309841   55975 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-595552" [04a65662-e533-4074-894d-b69913e3b277] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0421 19:47:19.309849   55975 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-595552" [cfaf2e71-c5cb-47b9-a70b-d200ac26a9bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0421 19:47:19.309855   55975 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-595552" [e899f8df-9d01-4db2-9d7b-5c1c74acffee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 19:47:19.309861   55975 system_pods.go:74] duration metric: took 7.267751ms to wait for pod list to return data ...
	I0421 19:47:19.309870   55975 kubeadm.go:576] duration metric: took 284.351291ms to wait for: map[apiserver:true system_pods:true]
	I0421 19:47:19.309885   55975 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:47:19.312514   55975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:47:19.312539   55975 node_conditions.go:123] node cpu capacity is 2
	I0421 19:47:19.312549   55975 node_conditions.go:105] duration metric: took 2.660268ms to run NodePressure ...
	I0421 19:47:19.312559   55975 start.go:240] waiting for startup goroutines ...
	I0421 19:47:19.322907   55975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:47:19.445721   55975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:47:19.477871   55975 main.go:141] libmachine: Making call to close driver server
	I0421 19:47:19.477901   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .Close
	I0421 19:47:19.478226   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Closing plugin on server side
	I0421 19:47:19.478261   55975 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:47:19.478273   55975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:47:19.478283   55975 main.go:141] libmachine: Making call to close driver server
	I0421 19:47:19.478311   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .Close
	I0421 19:47:19.478565   55975 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:47:19.478590   55975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:47:19.478571   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Closing plugin on server side
	I0421 19:47:19.486559   55975 main.go:141] libmachine: Making call to close driver server
	I0421 19:47:19.486577   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .Close
	I0421 19:47:19.486927   55975 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:47:19.486946   55975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:47:19.830128   55975 main.go:141] libmachine: Making call to close driver server
	I0421 19:47:19.830152   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .Close
	I0421 19:47:19.830442   55975 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:47:19.830459   55975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:47:19.830468   55975 main.go:141] libmachine: Making call to close driver server
	I0421 19:47:19.830475   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) Calling .Close
	I0421 19:47:19.830683   55975 main.go:141] libmachine: (kubernetes-upgrade-595552) DBG | Closing plugin on server side
	I0421 19:47:19.830718   55975 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:47:19.830727   55975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:47:19.833067   55975 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 19:47:19.834433   55975 addons.go:505] duration metric: took 808.885054ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0421 19:47:19.834466   55975 start.go:245] waiting for cluster config update ...
	I0421 19:47:19.834476   55975 start.go:254] writing updated cluster config ...
	I0421 19:47:19.834663   55975 ssh_runner.go:195] Run: rm -f paused
	I0421 19:47:19.882524   55975 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:47:19.884406   55975 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-595552" cluster and "default" namespace by default
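	The readiness sequence logged just above (api_server.go: "Checking apiserver healthz at https://192.168.72.31:8443/healthz ..." followed by "returned 200: ok") amounts to polling the kube-apiserver /healthz endpoint until it answers 200 with body "ok". The Go sketch below illustrates that polling pattern only; it is not minikube's implementation, and the endpoint URL (taken from the log), the timeout values, and the decision to skip TLS verification for the test cluster's self-signed CA are assumptions.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 with body "ok", or the deadline expires. TLS verification is
// skipped here because the example targets a test cluster with a
// self-signed CA (assumption, not taken from the report).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint as it appears in the log above; standalone usage is hypothetical.
	if err := waitForHealthz("https://192.168.72.31:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```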
	
	
	==> CRI-O <==
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.620296352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713728840620268199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d3ba62f-1b61-4f8f-a1d8-c919f8c63e9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.622081314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dd8e44b-0c7d-48df-b290-957a01764373 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.622177976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dd8e44b-0c7d-48df-b290-957a01764373 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.622446253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e1acd6c399d8a576cb0847ac6b2c589d755f8f2fdd4659eb6a95c61895a0bde,PodSandboxId:bbe6a91316b2e2e304ac77a3124adb71e74495f05aab3b7006bc29b1fef01910,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713728833061427833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f4d40bb7ad40cfc9572568181d72a3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01944c06c646264c80d6bc4b2c7bf273264c89a1a8a5fd479c17eb0591b33dbe,PodSandboxId:49f94f59a2d7d5761b2267ecb8787f6d2f8173d1a3cff741166457e46b7e1360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713728832980221486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42304f55bad75339c6d390c2557ca252,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5172ba2c0552b01a7ca730384a5ab43b367cccd492305cd3a58f4e1ac5ffe,PodSandboxId:210bf1fde0f16f6bf9f9a1bff80a7b5b1a7b2296734b72688b950cbfd769b52b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713728832994037733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c826ef356d13a8eeee425b03d342c88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6088c795,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20d8d4ea3cecef8e84894c4301cc7b32676cdb9d0459d9ddaa6856c038e4cea,PodSandboxId:ec5a80fb578fd42b6455bb8a475a05b7abc83d1fb3a80b849f2b95be5144ca6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713728832901981896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71da9094d09a86086e32f8da7bd3dff,},Annotations:map[string]string{io.kubernetes.container.hash: 2fbd3819,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dd8e44b-0c7d-48df-b290-957a01764373 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.665356363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbbcc6d4-1fe4-4fb6-a431-98af04485583 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.665481004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbbcc6d4-1fe4-4fb6-a431-98af04485583 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.675739303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ac38bd9-f4bd-47b4-b6b5-5573714c32b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.676215763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713728840676192768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ac38bd9-f4bd-47b4-b6b5-5573714c32b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.677450889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=015f5c89-fcdc-4429-955f-5598551b376f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.677733885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=015f5c89-fcdc-4429-955f-5598551b376f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.678359295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e1acd6c399d8a576cb0847ac6b2c589d755f8f2fdd4659eb6a95c61895a0bde,PodSandboxId:bbe6a91316b2e2e304ac77a3124adb71e74495f05aab3b7006bc29b1fef01910,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713728833061427833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f4d40bb7ad40cfc9572568181d72a3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01944c06c646264c80d6bc4b2c7bf273264c89a1a8a5fd479c17eb0591b33dbe,PodSandboxId:49f94f59a2d7d5761b2267ecb8787f6d2f8173d1a3cff741166457e46b7e1360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713728832980221486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42304f55bad75339c6d390c2557ca252,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5172ba2c0552b01a7ca730384a5ab43b367cccd492305cd3a58f4e1ac5ffe,PodSandboxId:210bf1fde0f16f6bf9f9a1bff80a7b5b1a7b2296734b72688b950cbfd769b52b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713728832994037733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c826ef356d13a8eeee425b03d342c88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6088c795,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20d8d4ea3cecef8e84894c4301cc7b32676cdb9d0459d9ddaa6856c038e4cea,PodSandboxId:ec5a80fb578fd42b6455bb8a475a05b7abc83d1fb3a80b849f2b95be5144ca6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713728832901981896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71da9094d09a86086e32f8da7bd3dff,},Annotations:map[string]string{io.kubernetes.container.hash: 2fbd3819,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=015f5c89-fcdc-4429-955f-5598551b376f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.720509171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fa9ab6e-0b66-4705-a7a2-d8523e57acd9 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.720592839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fa9ab6e-0b66-4705-a7a2-d8523e57acd9 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.723906513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=091dce0d-26dd-4f01-8706-4c586d016f78 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.725464181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713728840725336111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=091dce0d-26dd-4f01-8706-4c586d016f78 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.726557481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cbdae37-d3d9-4aa0-812c-51789dbf422a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.726614399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cbdae37-d3d9-4aa0-812c-51789dbf422a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.726766969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e1acd6c399d8a576cb0847ac6b2c589d755f8f2fdd4659eb6a95c61895a0bde,PodSandboxId:bbe6a91316b2e2e304ac77a3124adb71e74495f05aab3b7006bc29b1fef01910,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713728833061427833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f4d40bb7ad40cfc9572568181d72a3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01944c06c646264c80d6bc4b2c7bf273264c89a1a8a5fd479c17eb0591b33dbe,PodSandboxId:49f94f59a2d7d5761b2267ecb8787f6d2f8173d1a3cff741166457e46b7e1360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713728832980221486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42304f55bad75339c6d390c2557ca252,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5172ba2c0552b01a7ca730384a5ab43b367cccd492305cd3a58f4e1ac5ffe,PodSandboxId:210bf1fde0f16f6bf9f9a1bff80a7b5b1a7b2296734b72688b950cbfd769b52b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713728832994037733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c826ef356d13a8eeee425b03d342c88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6088c795,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20d8d4ea3cecef8e84894c4301cc7b32676cdb9d0459d9ddaa6856c038e4cea,PodSandboxId:ec5a80fb578fd42b6455bb8a475a05b7abc83d1fb3a80b849f2b95be5144ca6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713728832901981896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71da9094d09a86086e32f8da7bd3dff,},Annotations:map[string]string{io.kubernetes.container.hash: 2fbd3819,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cbdae37-d3d9-4aa0-812c-51789dbf422a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.769286535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5097a40-4dd7-4950-90cc-cd016a4603dd name=/runtime.v1.RuntimeService/Version
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.769391351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5097a40-4dd7-4950-90cc-cd016a4603dd name=/runtime.v1.RuntimeService/Version
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.771425751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae3405f3-8962-49f3-8716-b329322e2dad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.771880099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713728840771801703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae3405f3-8962-49f3-8716-b329322e2dad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.774539114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b08aea8-cf9e-4808-b856-e5bbfe30b93f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.774635912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b08aea8-cf9e-4808-b856-e5bbfe30b93f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:47:20 kubernetes-upgrade-595552 crio[3006]: time="2024-04-21 19:47:20.775015940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e1acd6c399d8a576cb0847ac6b2c589d755f8f2fdd4659eb6a95c61895a0bde,PodSandboxId:bbe6a91316b2e2e304ac77a3124adb71e74495f05aab3b7006bc29b1fef01910,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713728833061427833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f4d40bb7ad40cfc9572568181d72a3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01944c06c646264c80d6bc4b2c7bf273264c89a1a8a5fd479c17eb0591b33dbe,PodSandboxId:49f94f59a2d7d5761b2267ecb8787f6d2f8173d1a3cff741166457e46b7e1360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713728832980221486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42304f55bad75339c6d390c2557ca252,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5172ba2c0552b01a7ca730384a5ab43b367cccd492305cd3a58f4e1ac5ffe,PodSandboxId:210bf1fde0f16f6bf9f9a1bff80a7b5b1a7b2296734b72688b950cbfd769b52b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713728832994037733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c826ef356d13a8eeee425b03d342c88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6088c795,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20d8d4ea3cecef8e84894c4301cc7b32676cdb9d0459d9ddaa6856c038e4cea,PodSandboxId:ec5a80fb578fd42b6455bb8a475a05b7abc83d1fb3a80b849f2b95be5144ca6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713728832901981896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-595552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71da9094d09a86086e32f8da7bd3dff,},Annotations:map[string]string{io.kubernetes.container.hash: 2fbd3819,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b08aea8-cf9e-4808-b856-e5bbfe30b93f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e1acd6c399d8       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   7 seconds ago       Running             kube-scheduler            4                   bbe6a91316b2e       kube-scheduler-kubernetes-upgrade-595552
	31a5172ba2c05       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      3                   210bf1fde0f16       etcd-kubernetes-upgrade-595552
	01944c06c6462       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   7 seconds ago       Running             kube-controller-manager   8                   49f94f59a2d7d       kube-controller-manager-kubernetes-upgrade-595552
	d20d8d4ea3cec       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   7 seconds ago       Running             kube-apiserver            1                   ec5a80fb578fd       kube-apiserver-kubernetes-upgrade-595552
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-595552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-595552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=kubernetes-upgrade-595552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_47_18_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:47:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-595552
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:47:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:47:18 +0000   Sun, 21 Apr 2024 19:47:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:47:18 +0000   Sun, 21 Apr 2024 19:47:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:47:18 +0000   Sun, 21 Apr 2024 19:47:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:47:18 +0000   Sun, 21 Apr 2024 19:47:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.31
	  Hostname:    kubernetes-upgrade-595552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e71564a900448b8a63ed3b7ac3c89cf
	  System UUID:                6e71564a-9004-48b8-a63e-d3b7ac3c89cf
	  Boot ID:                    f5a2b093-ea1c-48e7-9c46-7ab3b8290033
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-595552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2s
	  kube-system                 kube-apiserver-kubernetes-upgrade-595552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-595552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-kubernetes-upgrade-595552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 2s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  2s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2s    kubelet  Node kubernetes-upgrade-595552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s    kubelet  Node kubernetes-upgrade-595552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s    kubelet  Node kubernetes-upgrade-595552 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.130792] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.309592] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +5.102506] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.065734] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.086431] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[Apr21 19:40] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.168674] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +5.270468] kauditd_printk_skb: 15 callbacks suppressed
	[ +33.564493] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.425417] systemd-fstab-generator[2305]: Ignoring "noauto" option for root device
	[  +0.205752] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.576337] systemd-fstab-generator[2529]: Ignoring "noauto" option for root device
	[  +0.316985] systemd-fstab-generator[2634]: Ignoring "noauto" option for root device
	[  +0.868660] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[Apr21 19:42] systemd-fstab-generator[3169]: Ignoring "noauto" option for root device
	[  +0.092991] kauditd_printk_skb: 197 callbacks suppressed
	[  +5.095995] kauditd_printk_skb: 85 callbacks suppressed
	[ +18.660812] systemd-fstab-generator[4098]: Ignoring "noauto" option for root device
	[  +7.895546] kauditd_printk_skb: 15 callbacks suppressed
	[Apr21 19:43] kauditd_printk_skb: 11 callbacks suppressed
	[Apr21 19:47] kauditd_printk_skb: 15 callbacks suppressed
	[  +1.200058] systemd-fstab-generator[10035]: Ignoring "noauto" option for root device
	[  +6.059261] systemd-fstab-generator[10358]: Ignoring "noauto" option for root device
	[  +0.087067] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.203281] systemd-fstab-generator[10427]: Ignoring "noauto" option for root device
	
	
	==> etcd [31a5172ba2c0552b01a7ca730384a5ab43b367cccd492305cd3a58f4e1ac5ffe] <==
	{"level":"info","ts":"2024-04-21T19:47:13.457649Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T19:47:13.463149Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1f4c66c5cd1fd608","initial-advertise-peer-urls":["https://192.168.72.31:2380"],"listen-peer-urls":["https://192.168.72.31:2380"],"advertise-client-urls":["https://192.168.72.31:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.31:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T19:47:13.463317Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T19:47:13.463561Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.31:2380"}
	{"level":"info","ts":"2024-04-21T19:47:13.479171Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.31:2380"}
	{"level":"info","ts":"2024-04-21T19:47:13.472086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 switched to configuration voters=(2255290513141782024)"}
	{"level":"info","ts":"2024-04-21T19:47:13.479684Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a6ca7a512d47f257","local-member-id":"1f4c66c5cd1fd608","added-peer-id":"1f4c66c5cd1fd608","added-peer-peer-urls":["https://192.168.72.31:2380"]}
	{"level":"info","ts":"2024-04-21T19:47:14.09156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-21T19:47:14.091708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-21T19:47:14.091893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 received MsgPreVoteResp from 1f4c66c5cd1fd608 at term 1"}
	{"level":"info","ts":"2024-04-21T19:47:14.091966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 became candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:47:14.091999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 received MsgVoteResp from 1f4c66c5cd1fd608 at term 2"}
	{"level":"info","ts":"2024-04-21T19:47:14.092113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f4c66c5cd1fd608 became leader at term 2"}
	{"level":"info","ts":"2024-04-21T19:47:14.092198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1f4c66c5cd1fd608 elected leader 1f4c66c5cd1fd608 at term 2"}
	{"level":"info","ts":"2024-04-21T19:47:14.093909Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1f4c66c5cd1fd608","local-member-attributes":"{Name:kubernetes-upgrade-595552 ClientURLs:[https://192.168.72.31:2379]}","request-path":"/0/members/1f4c66c5cd1fd608/attributes","cluster-id":"a6ca7a512d47f257","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:47:14.09403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:47:14.094466Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:47:14.095296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:47:14.096184Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:47:14.096304Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:47:14.097422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T19:47:14.098172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.31:2379"}
	{"level":"info","ts":"2024-04-21T19:47:14.098645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a6ca7a512d47f257","local-member-id":"1f4c66c5cd1fd608","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:47:14.098969Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:47:14.099099Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:47:21 up 7 min,  0 users,  load average: 1.00, 0.47, 0.23
	Linux kubernetes-upgrade-595552 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d20d8d4ea3cecef8e84894c4301cc7b32676cdb9d0459d9ddaa6856c038e4cea] <==
	I0421 19:47:15.672638       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 19:47:15.672683       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 19:47:15.676820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 19:47:15.676929       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 19:47:15.677250       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0421 19:47:15.678216       1 controller.go:615] quota admission added evaluator for: namespaces
	I0421 19:47:15.679235       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0421 19:47:15.680489       1 aggregator.go:165] initial CRD sync complete...
	I0421 19:47:15.680500       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 19:47:15.680506       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 19:47:15.680511       1 cache.go:39] Caches are synced for autoregister controller
	I0421 19:47:15.854936       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 19:47:16.484563       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0421 19:47:16.488951       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0421 19:47:16.488988       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0421 19:47:17.169247       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 19:47:17.224661       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0421 19:47:17.344073       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0421 19:47:17.363377       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.31]
	I0421 19:47:17.364646       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 19:47:17.374283       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0421 19:47:17.576553       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0421 19:47:18.143730       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0421 19:47:18.170634       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0421 19:47:18.198411       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [01944c06c646264c80d6bc4b2c7bf273264c89a1a8a5fd479c17eb0591b33dbe] <==
	I0421 19:47:20.276539       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0421 19:47:20.276696       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0421 19:47:20.276767       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0421 19:47:20.426787       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0421 19:47:20.427005       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0421 19:47:20.427043       1 shared_informer.go:313] Waiting for caches to sync for job
	I0421 19:47:20.577058       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0421 19:47:20.577226       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0421 19:47:20.577263       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0421 19:47:20.624176       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0421 19:47:20.624217       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0421 19:47:20.624287       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0421 19:47:20.624301       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0421 19:47:20.624309       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0421 19:47:20.675120       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0421 19:47:20.675174       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0421 19:47:20.675227       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0421 19:47:20.675282       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0421 19:47:20.675310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0421 19:47:20.930479       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0421 19:47:20.930560       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0421 19:47:20.930571       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0421 19:47:21.076243       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0421 19:47:21.076337       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0421 19:47:21.076348       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	
	
	==> kube-scheduler [7e1acd6c399d8a576cb0847ac6b2c589d755f8f2fdd4659eb6a95c61895a0bde] <==
	W0421 19:47:15.621923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:47:15.621962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:47:15.621942       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:47:15.622151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:47:16.535324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:47:16.535506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:47:16.606689       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 19:47:16.606916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 19:47:16.636143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:47:16.636200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:47:16.636288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 19:47:16.636334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 19:47:16.728802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 19:47:16.728984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 19:47:16.734788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 19:47:16.734951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 19:47:16.802815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:47:16.803540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:47:16.854012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:47:16.854109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:47:16.913219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:47:16.913282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:47:17.082170       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:47:17.082231       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0421 19:47:19.606991       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.130293   10365 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.130401   10365 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.159260   10365 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.182494   10365 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.182611   10365 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.205103   10365 topology_manager.go:215] "Topology Admit Handler" podUID="c826ef356d13a8eeee425b03d342c88e" podNamespace="kube-system" podName="etcd-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.205332   10365 topology_manager.go:215] "Topology Admit Handler" podUID="a71da9094d09a86086e32f8da7bd3dff" podNamespace="kube-system" podName="kube-apiserver-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.205454   10365 topology_manager.go:215] "Topology Admit Handler" podUID="42304f55bad75339c6d390c2557ca252" podNamespace="kube-system" podName="kube-controller-manager-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.205512   10365 topology_manager.go:215] "Topology Admit Handler" podUID="a7f4d40bb7ad40cfc9572568181d72a3" podNamespace="kube-system" podName="kube-scheduler-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.249824   10365 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.250008   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7f4d40bb7ad40cfc9572568181d72a3-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-595552\" (UID: \"a7f4d40bb7ad40cfc9572568181d72a3\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.351599   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42304f55bad75339c6d390c2557ca252-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-595552\" (UID: \"42304f55bad75339c6d390c2557ca252\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.352459   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c826ef356d13a8eeee425b03d342c88e-etcd-data\") pod \"etcd-kubernetes-upgrade-595552\" (UID: \"c826ef356d13a8eeee425b03d342c88e\") " pod="kube-system/etcd-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.352945   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a71da9094d09a86086e32f8da7bd3dff-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-595552\" (UID: \"a71da9094d09a86086e32f8da7bd3dff\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.353247   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71da9094d09a86086e32f8da7bd3dff-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-595552\" (UID: \"a71da9094d09a86086e32f8da7bd3dff\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.353535   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42304f55bad75339c6d390c2557ca252-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-595552\" (UID: \"42304f55bad75339c6d390c2557ca252\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.353786   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42304f55bad75339c6d390c2557ca252-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-595552\" (UID: \"42304f55bad75339c6d390c2557ca252\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.354042   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42304f55bad75339c6d390c2557ca252-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-595552\" (UID: \"42304f55bad75339c6d390c2557ca252\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.354205   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42304f55bad75339c6d390c2557ca252-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-595552\" (UID: \"42304f55bad75339c6d390c2557ca252\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.354380   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c826ef356d13a8eeee425b03d342c88e-etcd-certs\") pod \"etcd-kubernetes-upgrade-595552\" (UID: \"c826ef356d13a8eeee425b03d342c88e\") " pod="kube-system/etcd-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.354534   10365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a71da9094d09a86086e32f8da7bd3dff-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-595552\" (UID: \"a71da9094d09a86086e32f8da7bd3dff\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-595552"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.772461   10365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-595552" podStartSLOduration=0.772429065 podStartE2EDuration="772.429065ms" podCreationTimestamp="2024-04-21 19:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 19:47:18.752442944 +0000 UTC m=+0.831372443" watchObservedRunningTime="2024-04-21 19:47:18.772429065 +0000 UTC m=+0.851358559"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.798172   10365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-595552" podStartSLOduration=0.798152792 podStartE2EDuration="798.152792ms" podCreationTimestamp="2024-04-21 19:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 19:47:18.773673796 +0000 UTC m=+0.852603298" watchObservedRunningTime="2024-04-21 19:47:18.798152792 +0000 UTC m=+0.877082294"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.811342   10365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-595552" podStartSLOduration=0.811319218 podStartE2EDuration="811.319218ms" podCreationTimestamp="2024-04-21 19:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 19:47:18.798555796 +0000 UTC m=+0.877485280" watchObservedRunningTime="2024-04-21 19:47:18.811319218 +0000 UTC m=+0.890248714"
	Apr 21 19:47:18 kubernetes-upgrade-595552 kubelet[10365]: I0421 19:47:18.831329   10365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-595552" podStartSLOduration=0.831306017 podStartE2EDuration="831.306017ms" podCreationTimestamp="2024-04-21 19:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 19:47:18.811571463 +0000 UTC m=+0.890500945" watchObservedRunningTime="2024-04-21 19:47:18.831306017 +0000 UTC m=+0.910235508"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-595552 -n kubernetes-upgrade-595552
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-595552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-595552 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-595552 describe pod storage-provisioner: exit status 1 (61.688864ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-595552 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-595552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-595552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-595552: (1.111260197s)
--- FAIL: TestKubernetesUpgrade (847.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (288.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m48.316878177s)

                                                
                                                
-- stdout --
	* [old-k8s-version-867585] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-867585" primary control-plane node in "old-k8s-version-867585" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:38:46.533745   54509 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:38:46.533893   54509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:38:46.533906   54509 out.go:304] Setting ErrFile to fd 2...
	I0421 19:38:46.533915   54509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:38:46.534452   54509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:38:46.535469   54509 out.go:298] Setting JSON to false
	I0421 19:38:46.536464   54509 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4825,"bootTime":1713723502,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:38:46.536539   54509 start.go:139] virtualization: kvm guest
	I0421 19:38:46.538743   54509 out.go:177] * [old-k8s-version-867585] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:38:46.540301   54509 notify.go:220] Checking for updates...
	I0421 19:38:46.540339   54509 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:38:46.541776   54509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:38:46.542939   54509 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:38:46.544414   54509 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:38:46.545680   54509 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:38:46.547244   54509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:38:46.549377   54509 config.go:182] Loaded profile config "cert-options-015184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:38:46.549521   54509 config.go:182] Loaded profile config "kubernetes-upgrade-595552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:38:46.549701   54509 config.go:182] Loaded profile config "pause-321307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:38:46.549813   54509 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:38:46.588315   54509 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 19:38:46.589424   54509 start.go:297] selected driver: kvm2
	I0421 19:38:46.589435   54509 start.go:901] validating driver "kvm2" against <nil>
	I0421 19:38:46.589445   54509 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:38:46.590236   54509 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:38:46.590333   54509 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:38:46.605293   54509 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:38:46.605344   54509 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 19:38:46.605601   54509 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:38:46.605662   54509 cni.go:84] Creating CNI manager for ""
	I0421 19:38:46.605680   54509 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:38:46.605691   54509 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 19:38:46.605757   54509 start.go:340] cluster config:
	{Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:38:46.605893   54509 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:38:46.607606   54509 out.go:177] * Starting "old-k8s-version-867585" primary control-plane node in "old-k8s-version-867585" cluster
	I0421 19:38:46.609001   54509 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:38:46.609032   54509 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:38:46.609042   54509 cache.go:56] Caching tarball of preloaded images
	I0421 19:38:46.609111   54509 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:38:46.609121   54509 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0421 19:38:46.609202   54509 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/config.json ...
	I0421 19:38:46.609219   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/config.json: {Name:mk5d7a899d339d828b05f0d4db66221faebae7da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:38:46.609337   54509 start.go:360] acquireMachinesLock for old-k8s-version-867585: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:38:59.509702   54509 start.go:364] duration metric: took 12.900321436s to acquireMachinesLock for "old-k8s-version-867585"
	I0421 19:38:59.509792   54509 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:38:59.509932   54509 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 19:38:59.658523   54509 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:38:59.658786   54509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:38:59.658828   54509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:38:59.674049   54509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0421 19:38:59.674493   54509 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:38:59.675138   54509 main.go:141] libmachine: Using API Version  1
	I0421 19:38:59.675164   54509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:38:59.675537   54509 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:38:59.675717   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:38:59.675866   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:38:59.676007   54509 start.go:159] libmachine.API.Create for "old-k8s-version-867585" (driver="kvm2")
	I0421 19:38:59.676040   54509 client.go:168] LocalClient.Create starting
	I0421 19:38:59.676092   54509 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 19:38:59.676140   54509 main.go:141] libmachine: Decoding PEM data...
	I0421 19:38:59.676162   54509 main.go:141] libmachine: Parsing certificate...
	I0421 19:38:59.676227   54509 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 19:38:59.676256   54509 main.go:141] libmachine: Decoding PEM data...
	I0421 19:38:59.676277   54509 main.go:141] libmachine: Parsing certificate...
	I0421 19:38:59.676306   54509 main.go:141] libmachine: Running pre-create checks...
	I0421 19:38:59.676318   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .PreCreateCheck
	I0421 19:38:59.676705   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetConfigRaw
	I0421 19:38:59.677126   54509 main.go:141] libmachine: Creating machine...
	I0421 19:38:59.677141   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .Create
	I0421 19:38:59.677261   54509 main.go:141] libmachine: (old-k8s-version-867585) Creating KVM machine...
	I0421 19:38:59.678467   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found existing default KVM network
	I0421 19:38:59.679982   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:38:59.679828   54734 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c0:6b:83} reservation:<nil>}
	I0421 19:38:59.681245   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:38:59.681158   54734 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a0710}
	I0421 19:38:59.681275   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | created network xml: 
	I0421 19:38:59.681288   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | <network>
	I0421 19:38:59.681302   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |   <name>mk-old-k8s-version-867585</name>
	I0421 19:38:59.681312   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |   <dns enable='no'/>
	I0421 19:38:59.681322   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |   
	I0421 19:38:59.681336   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0421 19:38:59.681347   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |     <dhcp>
	I0421 19:38:59.681373   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0421 19:38:59.681391   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |     </dhcp>
	I0421 19:38:59.681401   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |   </ip>
	I0421 19:38:59.681409   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG |   
	I0421 19:38:59.681501   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | </network>
	I0421 19:38:59.681535   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | 
	I0421 19:38:59.825339   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | trying to create private KVM network mk-old-k8s-version-867585 192.168.50.0/24...
	I0421 19:38:59.911789   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585 ...
	I0421 19:38:59.911824   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | private KVM network mk-old-k8s-version-867585 192.168.50.0/24 created
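
	[editor's note] The network XML logged just above is what libvirt is given for the isolated mk-old-k8s-version-867585 network. As a rough illustration only (minikube drives libvirt through docker-machine-driver-kvm2, not by shelling out), the same network could be created by hand with a small Go helper that writes the XML to a temp file and calls the virsh CLI; the name and the 192.168.50.0/24 range are taken from the log, everything else here is an assumption.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	const netXML = `<network>
	  <name>mk-old-k8s-version-867585</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		// Write the network definition to a temporary file for virsh to consume.
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			log.Fatal(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(netXML); err != nil {
			log.Fatal(err)
		}
		f.Close()

		// Define the persistent network, then start it (the equivalent of the
		// "trying to create private KVM network" step in the log).
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-old-k8s-version-867585"},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			fmt.Printf("virsh %v: %s", args, out)
			if err != nil {
				log.Fatal(err)
			}
		}
	}

	The domain XML that appears later in the log can be loaded the same way with "virsh define" followed by "virsh start".
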
	I0421 19:38:59.911842   54509 main.go:141] libmachine: (old-k8s-version-867585) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 19:38:59.911865   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:38:59.911697   54734 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:38:59.912009   54509 main.go:141] libmachine: (old-k8s-version-867585) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:39:00.157189   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:00.157021   54734 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa...
	I0421 19:39:00.267028   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:00.266861   54734 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/old-k8s-version-867585.rawdisk...
	I0421 19:39:00.267060   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Writing magic tar header
	I0421 19:39:00.267080   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Writing SSH key tar header
	I0421 19:39:00.267094   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:00.267060   54734 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585 ...
	I0421 19:39:00.267190   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585 (perms=drwx------)
	I0421 19:39:00.267212   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585
	I0421 19:39:00.267224   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 19:39:00.267248   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 19:39:00.267264   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 19:39:00.267280   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 19:39:00.267295   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 19:39:00.267312   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:39:00.267330   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 19:39:00.267348   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 19:39:00.267365   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home/jenkins
	I0421 19:39:00.267377   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Checking permissions on dir: /home
	I0421 19:39:00.267386   54509 main.go:141] libmachine: (old-k8s-version-867585) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 19:39:00.267400   54509 main.go:141] libmachine: (old-k8s-version-867585) Creating domain...
	I0421 19:39:00.267441   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Skipping /home - not owner
	I0421 19:39:00.268577   54509 main.go:141] libmachine: (old-k8s-version-867585) define libvirt domain using xml: 
	I0421 19:39:00.268623   54509 main.go:141] libmachine: (old-k8s-version-867585) <domain type='kvm'>
	I0421 19:39:00.268636   54509 main.go:141] libmachine: (old-k8s-version-867585)   <name>old-k8s-version-867585</name>
	I0421 19:39:00.268653   54509 main.go:141] libmachine: (old-k8s-version-867585)   <memory unit='MiB'>2200</memory>
	I0421 19:39:00.268667   54509 main.go:141] libmachine: (old-k8s-version-867585)   <vcpu>2</vcpu>
	I0421 19:39:00.268683   54509 main.go:141] libmachine: (old-k8s-version-867585)   <features>
	I0421 19:39:00.268696   54509 main.go:141] libmachine: (old-k8s-version-867585)     <acpi/>
	I0421 19:39:00.268706   54509 main.go:141] libmachine: (old-k8s-version-867585)     <apic/>
	I0421 19:39:00.268728   54509 main.go:141] libmachine: (old-k8s-version-867585)     <pae/>
	I0421 19:39:00.268741   54509 main.go:141] libmachine: (old-k8s-version-867585)     
	I0421 19:39:00.268751   54509 main.go:141] libmachine: (old-k8s-version-867585)   </features>
	I0421 19:39:00.268765   54509 main.go:141] libmachine: (old-k8s-version-867585)   <cpu mode='host-passthrough'>
	I0421 19:39:00.268779   54509 main.go:141] libmachine: (old-k8s-version-867585)   
	I0421 19:39:00.268791   54509 main.go:141] libmachine: (old-k8s-version-867585)   </cpu>
	I0421 19:39:00.268808   54509 main.go:141] libmachine: (old-k8s-version-867585)   <os>
	I0421 19:39:00.268818   54509 main.go:141] libmachine: (old-k8s-version-867585)     <type>hvm</type>
	I0421 19:39:00.268829   54509 main.go:141] libmachine: (old-k8s-version-867585)     <boot dev='cdrom'/>
	I0421 19:39:00.268842   54509 main.go:141] libmachine: (old-k8s-version-867585)     <boot dev='hd'/>
	I0421 19:39:00.268857   54509 main.go:141] libmachine: (old-k8s-version-867585)     <bootmenu enable='no'/>
	I0421 19:39:00.268869   54509 main.go:141] libmachine: (old-k8s-version-867585)   </os>
	I0421 19:39:00.268883   54509 main.go:141] libmachine: (old-k8s-version-867585)   <devices>
	I0421 19:39:00.268896   54509 main.go:141] libmachine: (old-k8s-version-867585)     <disk type='file' device='cdrom'>
	I0421 19:39:00.268915   54509 main.go:141] libmachine: (old-k8s-version-867585)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/boot2docker.iso'/>
	I0421 19:39:00.268928   54509 main.go:141] libmachine: (old-k8s-version-867585)       <target dev='hdc' bus='scsi'/>
	I0421 19:39:00.268942   54509 main.go:141] libmachine: (old-k8s-version-867585)       <readonly/>
	I0421 19:39:00.268964   54509 main.go:141] libmachine: (old-k8s-version-867585)     </disk>
	I0421 19:39:00.268980   54509 main.go:141] libmachine: (old-k8s-version-867585)     <disk type='file' device='disk'>
	I0421 19:39:00.268994   54509 main.go:141] libmachine: (old-k8s-version-867585)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 19:39:00.269019   54509 main.go:141] libmachine: (old-k8s-version-867585)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/old-k8s-version-867585.rawdisk'/>
	I0421 19:39:00.269033   54509 main.go:141] libmachine: (old-k8s-version-867585)       <target dev='hda' bus='virtio'/>
	I0421 19:39:00.269043   54509 main.go:141] libmachine: (old-k8s-version-867585)     </disk>
	I0421 19:39:00.269051   54509 main.go:141] libmachine: (old-k8s-version-867585)     <interface type='network'>
	I0421 19:39:00.269062   54509 main.go:141] libmachine: (old-k8s-version-867585)       <source network='mk-old-k8s-version-867585'/>
	I0421 19:39:00.269075   54509 main.go:141] libmachine: (old-k8s-version-867585)       <model type='virtio'/>
	I0421 19:39:00.269089   54509 main.go:141] libmachine: (old-k8s-version-867585)     </interface>
	I0421 19:39:00.269102   54509 main.go:141] libmachine: (old-k8s-version-867585)     <interface type='network'>
	I0421 19:39:00.269116   54509 main.go:141] libmachine: (old-k8s-version-867585)       <source network='default'/>
	I0421 19:39:00.269127   54509 main.go:141] libmachine: (old-k8s-version-867585)       <model type='virtio'/>
	I0421 19:39:00.269135   54509 main.go:141] libmachine: (old-k8s-version-867585)     </interface>
	I0421 19:39:00.269147   54509 main.go:141] libmachine: (old-k8s-version-867585)     <serial type='pty'>
	I0421 19:39:00.269162   54509 main.go:141] libmachine: (old-k8s-version-867585)       <target port='0'/>
	I0421 19:39:00.269170   54509 main.go:141] libmachine: (old-k8s-version-867585)     </serial>
	I0421 19:39:00.269185   54509 main.go:141] libmachine: (old-k8s-version-867585)     <console type='pty'>
	I0421 19:39:00.269198   54509 main.go:141] libmachine: (old-k8s-version-867585)       <target type='serial' port='0'/>
	I0421 19:39:00.269211   54509 main.go:141] libmachine: (old-k8s-version-867585)     </console>
	I0421 19:39:00.269225   54509 main.go:141] libmachine: (old-k8s-version-867585)     <rng model='virtio'>
	I0421 19:39:00.269240   54509 main.go:141] libmachine: (old-k8s-version-867585)       <backend model='random'>/dev/random</backend>
	I0421 19:39:00.269253   54509 main.go:141] libmachine: (old-k8s-version-867585)     </rng>
	I0421 19:39:00.269264   54509 main.go:141] libmachine: (old-k8s-version-867585)     
	I0421 19:39:00.269276   54509 main.go:141] libmachine: (old-k8s-version-867585)     
	I0421 19:39:00.269286   54509 main.go:141] libmachine: (old-k8s-version-867585)   </devices>
	I0421 19:39:00.269298   54509 main.go:141] libmachine: (old-k8s-version-867585) </domain>
	I0421 19:39:00.269315   54509 main.go:141] libmachine: (old-k8s-version-867585) 
	I0421 19:39:00.398876   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:48:07:9d in network default
	I0421 19:39:00.399689   54509 main.go:141] libmachine: (old-k8s-version-867585) Ensuring networks are active...
	I0421 19:39:00.399729   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:00.400716   54509 main.go:141] libmachine: (old-k8s-version-867585) Ensuring network default is active
	I0421 19:39:00.401299   54509 main.go:141] libmachine: (old-k8s-version-867585) Ensuring network mk-old-k8s-version-867585 is active
	I0421 19:39:00.401872   54509 main.go:141] libmachine: (old-k8s-version-867585) Getting domain xml...
	I0421 19:39:00.402761   54509 main.go:141] libmachine: (old-k8s-version-867585) Creating domain...
	I0421 19:39:02.021791   54509 main.go:141] libmachine: (old-k8s-version-867585) Waiting to get IP...
	I0421 19:39:02.022781   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:02.023235   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:02.023285   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:02.023220   54734 retry.go:31] will retry after 263.98706ms: waiting for machine to come up
	I0421 19:39:02.288817   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:02.289325   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:02.289354   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:02.289278   54734 retry.go:31] will retry after 291.103058ms: waiting for machine to come up
	I0421 19:39:02.581836   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:02.582309   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:02.582344   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:02.582244   54734 retry.go:31] will retry after 426.902793ms: waiting for machine to come up
	I0421 19:39:03.011082   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:03.011566   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:03.011599   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:03.011519   54734 retry.go:31] will retry after 449.278177ms: waiting for machine to come up
	I0421 19:39:03.462071   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:03.462494   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:03.462525   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:03.462441   54734 retry.go:31] will retry after 731.678108ms: waiting for machine to come up
	I0421 19:39:04.196347   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:04.196813   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:04.196849   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:04.196785   54734 retry.go:31] will retry after 758.001946ms: waiting for machine to come up
	I0421 19:39:04.955929   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:04.956402   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:04.956437   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:04.956327   54734 retry.go:31] will retry after 852.959669ms: waiting for machine to come up
	I0421 19:39:05.811570   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:05.812113   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:05.812142   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:05.812056   54734 retry.go:31] will retry after 974.529446ms: waiting for machine to come up
	I0421 19:39:06.787950   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:06.788429   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:06.788467   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:06.788363   54734 retry.go:31] will retry after 1.147154958s: waiting for machine to come up
	I0421 19:39:07.936825   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:07.937241   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:07.937262   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:07.937181   54734 retry.go:31] will retry after 1.666484368s: waiting for machine to come up
	I0421 19:39:09.606134   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:09.606564   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:09.606590   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:09.606522   54734 retry.go:31] will retry after 2.293756276s: waiting for machine to come up
	I0421 19:39:11.903593   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:11.904212   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:11.904246   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:11.904186   54734 retry.go:31] will retry after 3.540518047s: waiting for machine to come up
	I0421 19:39:15.446336   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:15.446843   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:15.446880   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:15.446806   54734 retry.go:31] will retry after 4.116319048s: waiting for machine to come up
	I0421 19:39:19.568110   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:19.568573   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:39:19.568597   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:39:19.568527   54734 retry.go:31] will retry after 5.17720153s: waiting for machine to come up
	I0421 19:39:24.747007   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:24.747482   54509 main.go:141] libmachine: (old-k8s-version-867585) Found IP for machine: 192.168.50.42
	I0421 19:39:24.747512   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has current primary IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
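
	[editor's note] The long run of "unable to find current IP address ... will retry after ..." lines above is the driver polling libvirt's DHCP leases with a growing, jittered delay until the guest obtains an address. A minimal sketch of that wait loop follows, assuming a hypothetical lookupIP helper (the real driver reads the lease table via the libvirt API; the durations only approximate the ones logged).

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for finding the DHCP lease that matches the VM's MAC,
	// e.g. by parsing "virsh net-dhcp-leases mk-old-k8s-version-867585".
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP polls until the machine has an address or the timeout expires,
	// sleeping for an increasing, jittered interval between attempts.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		ip, err := waitForIP("52:54:00:00:e4:26", 30*time.Second)
		fmt.Println(ip, err)
	}
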
	I0421 19:39:24.747522   54509 main.go:141] libmachine: (old-k8s-version-867585) Reserving static IP address...
	I0421 19:39:24.747887   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-867585", mac: "52:54:00:00:e4:26", ip: "192.168.50.42"} in network mk-old-k8s-version-867585
	I0421 19:39:24.823570   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Getting to WaitForSSH function...
	I0421 19:39:24.823604   54509 main.go:141] libmachine: (old-k8s-version-867585) Reserved static IP address: 192.168.50.42
	I0421 19:39:24.823617   54509 main.go:141] libmachine: (old-k8s-version-867585) Waiting for SSH to be available...
	I0421 19:39:24.826154   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:24.826648   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:24.826693   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:24.826862   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Using SSH client type: external
	I0421 19:39:24.826904   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa (-rw-------)
	I0421 19:39:24.826944   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:39:24.826973   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | About to run SSH command:
	I0421 19:39:24.826994   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | exit 0
	I0421 19:39:24.950735   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | SSH cmd err, output: <nil>: 
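
	[editor's note] The "Using SSH client type: external" block above shows the exact ssh invocation used to probe the new VM: run "exit 0" with host key checking disabled until it succeeds. A sketch of the same probe in Go, with the options, key path and address copied from the log; treat it as illustrative rather than the driver's actual code.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa"
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", key,
			"-p", "22",
			"docker@192.168.50.42",
			"exit 0",
		}
		// A zero exit status means sshd is up and the key is accepted.
		if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
			log.Fatalf("ssh probe failed: %v\n%s", err, out)
		}
		log.Println("SSH is available")
	}
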
	I0421 19:39:24.951071   54509 main.go:141] libmachine: (old-k8s-version-867585) KVM machine creation complete!
	I0421 19:39:24.951393   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetConfigRaw
	I0421 19:39:24.951944   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:24.952172   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:24.952330   54509 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 19:39:24.952355   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetState
	I0421 19:39:24.953647   54509 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 19:39:24.953663   54509 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 19:39:24.953669   54509 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 19:39:24.953674   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:24.956258   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:24.956594   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:24.956622   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:24.956775   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:24.956947   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:24.957105   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:24.957250   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:24.957434   54509 main.go:141] libmachine: Using SSH client type: native
	I0421 19:39:24.957668   54509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:39:24.957681   54509 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 19:39:25.065620   54509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:39:25.065649   54509 main.go:141] libmachine: Detecting the provisioner...
	I0421 19:39:25.065666   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:25.068477   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.068843   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.068871   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.069076   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:25.069255   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.069458   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.069608   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:25.069753   54509 main.go:141] libmachine: Using SSH client type: native
	I0421 19:39:25.069937   54509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:39:25.069947   54509 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 19:39:25.171826   54509 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 19:39:25.171901   54509 main.go:141] libmachine: found compatible host: buildroot
	I0421 19:39:25.171910   54509 main.go:141] libmachine: Provisioning with buildroot...
	I0421 19:39:25.171920   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:39:25.172166   54509 buildroot.go:166] provisioning hostname "old-k8s-version-867585"
	I0421 19:39:25.172199   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:39:25.172404   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:25.174968   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.175315   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.175340   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.175599   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:25.175785   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.175978   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.176114   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:25.176296   54509 main.go:141] libmachine: Using SSH client type: native
	I0421 19:39:25.176504   54509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:39:25.176517   54509 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-867585 && echo "old-k8s-version-867585" | sudo tee /etc/hostname
	I0421 19:39:25.299628   54509 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-867585
	
	I0421 19:39:25.299660   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:25.302446   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.302976   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.303006   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.303237   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:25.303458   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.303633   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.303770   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:25.303954   54509 main.go:141] libmachine: Using SSH client type: native
	I0421 19:39:25.304160   54509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:39:25.304194   54509 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-867585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-867585/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-867585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:39:25.416756   54509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:39:25.416790   54509 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:39:25.416814   54509 buildroot.go:174] setting up certificates
	I0421 19:39:25.416825   54509 provision.go:84] configureAuth start
	I0421 19:39:25.416838   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:39:25.417145   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:39:25.419708   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.420098   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.420128   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.420264   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:25.422507   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.422844   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.422878   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.423112   54509 provision.go:143] copyHostCerts
	I0421 19:39:25.423166   54509 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:39:25.423175   54509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:39:25.423232   54509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:39:25.423344   54509 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:39:25.423354   54509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:39:25.423374   54509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:39:25.423442   54509 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:39:25.423450   54509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:39:25.423467   54509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:39:25.423526   54509 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-867585 san=[127.0.0.1 192.168.50.42 localhost minikube old-k8s-version-867585]
	I0421 19:39:25.701103   54509 provision.go:177] copyRemoteCerts
	I0421 19:39:25.701156   54509 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:39:25.701178   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:25.703856   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.704230   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.704260   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.704419   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:25.704611   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.704777   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:25.704902   54509 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:39:25.787738   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:39:25.816337   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0421 19:39:25.843077   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:39:25.868598   54509 provision.go:87] duration metric: took 451.761201ms to configureAuth
	I0421 19:39:25.868624   54509 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:39:25.868779   54509 config.go:182] Loaded profile config "old-k8s-version-867585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:39:25.868848   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:25.871457   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.871787   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:25.871808   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:25.872002   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:25.872220   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.872382   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:25.872528   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:25.872680   54509 main.go:141] libmachine: Using SSH client type: native
	I0421 19:39:25.872834   54509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:39:25.872851   54509 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:39:26.151747   54509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:39:26.151774   54509 main.go:141] libmachine: Checking connection to Docker...
	I0421 19:39:26.151784   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetURL
	I0421 19:39:26.153124   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | Using libvirt version 6000000
	I0421 19:39:26.155303   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.155631   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.155659   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.155828   54509 main.go:141] libmachine: Docker is up and running!
	I0421 19:39:26.155842   54509 main.go:141] libmachine: Reticulating splines...
	I0421 19:39:26.155848   54509 client.go:171] duration metric: took 26.479799547s to LocalClient.Create
	I0421 19:39:26.155871   54509 start.go:167] duration metric: took 26.479866151s to libmachine.API.Create "old-k8s-version-867585"
	I0421 19:39:26.155880   54509 start.go:293] postStartSetup for "old-k8s-version-867585" (driver="kvm2")
	I0421 19:39:26.155891   54509 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:39:26.155906   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:26.156169   54509 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:39:26.156201   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:26.158365   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.158671   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.158698   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.158834   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:26.158989   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:26.159140   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:26.159276   54509 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:39:26.241031   54509 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:39:26.245861   54509 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:39:26.245896   54509 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:39:26.245965   54509 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:39:26.246106   54509 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:39:26.246233   54509 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:39:26.255891   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:39:26.282641   54509 start.go:296] duration metric: took 126.747303ms for postStartSetup
	I0421 19:39:26.282693   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetConfigRaw
	I0421 19:39:26.283356   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:39:26.286534   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.286892   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.286931   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.287179   54509 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/config.json ...
	I0421 19:39:26.287356   54509 start.go:128] duration metric: took 26.777413706s to createHost
	I0421 19:39:26.287380   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:26.289595   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.289920   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.289941   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.290096   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:26.290268   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:26.290398   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:26.290566   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:26.290737   54509 main.go:141] libmachine: Using SSH client type: native
	I0421 19:39:26.290890   54509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:39:26.290902   54509 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:39:26.391377   54509 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713728366.376861624
	
	I0421 19:39:26.391400   54509 fix.go:216] guest clock: 1713728366.376861624
	I0421 19:39:26.391409   54509 fix.go:229] Guest: 2024-04-21 19:39:26.376861624 +0000 UTC Remote: 2024-04-21 19:39:26.287366588 +0000 UTC m=+39.799829849 (delta=89.495036ms)
	I0421 19:39:26.391431   54509 fix.go:200] guest clock delta is within tolerance: 89.495036ms
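The fix.go lines above compare the guest VM's clock against the host-side timestamp captured when the SSH command was issued and accept the ~89ms skew because it falls inside the allowed tolerance. A minimal Go sketch of that comparison (the function name and the 2s tolerance are illustrative assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock and the host
// clock differ by no more than the allowed tolerance.
// The name and the tolerance passed below are assumptions for illustration.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(89 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}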
	I0421 19:39:26.391437   54509 start.go:83] releasing machines lock for "old-k8s-version-867585", held for 26.881703462s
	I0421 19:39:26.391470   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:26.391794   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:39:26.394689   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.395117   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.395147   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.395352   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:26.395971   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:26.396192   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:39:26.396332   54509 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:39:26.396367   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:26.396444   54509 ssh_runner.go:195] Run: cat /version.json
	I0421 19:39:26.396467   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:39:26.399290   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.399644   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.399707   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.399759   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.399910   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:26.399933   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:26.399961   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:26.400123   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:26.400162   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:39:26.400262   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:39:26.400312   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:26.400405   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:39:26.400461   54509 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:39:26.400524   54509 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:39:26.479895   54509 ssh_runner.go:195] Run: systemctl --version
	I0421 19:39:26.503785   54509 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:39:26.678379   54509 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:39:26.686596   54509 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:39:26.686662   54509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:39:26.707799   54509 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
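The cni.go step above globs /etc/cni/net.d for bridge and podman configs and renames them with a .mk_disabled suffix so they cannot conflict with the CNI configuration minikube writes later. A rough Go equivalent of that rename pass (patterns and error handling simplified; a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Find bridge/podman CNI configs and disable them by renaming, mirroring the
// find/mv command in the log above.
func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}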
	I0421 19:39:26.707827   54509 start.go:494] detecting cgroup driver to use...
	I0421 19:39:26.707900   54509 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:39:26.727169   54509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:39:26.746432   54509 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:39:26.746509   54509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:39:26.762268   54509 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:39:26.779820   54509 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:39:26.905311   54509 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:39:27.091551   54509 docker.go:233] disabling docker service ...
	I0421 19:39:27.091621   54509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:39:27.110337   54509 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:39:27.126025   54509 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:39:27.277164   54509 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:39:27.436591   54509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:39:27.459709   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:39:27.481565   54509 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0421 19:39:27.481628   54509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:39:27.496107   54509 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:39:27.496183   54509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:39:27.509330   54509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:39:27.523394   54509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:39:27.539404   54509 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:39:27.554470   54509 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:39:27.567298   54509 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:39:27.567348   54509 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:39:27.585849   54509 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
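crio.go treats the failed sysctl as a sign that the br_netfilter module is not loaded, loads it, and then enables IPv4 forwarding. A small Go sketch of the same check-then-modprobe fallback (requires root; minikube's actual recovery logic may differ):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// If the bridge-nf-call-iptables sysctl is missing, br_netfilter is not loaded,
// so load the module and re-read the value, as the commands in the log do.
func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		fmt.Println("bridge netfilter not available, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	if data, err := os.ReadFile(key); err == nil {
		fmt.Printf("bridge-nf-call-iptables = %s", data)
	}
}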
	I0421 19:39:27.598087   54509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:39:27.755381   54509 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:39:27.913376   54509 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:39:27.913451   54509 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:39:27.919327   54509 start.go:562] Will wait 60s for crictl version
	I0421 19:39:27.919389   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:27.924322   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:39:27.972573   54509 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
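The "Will wait 60s for socket path" and "Will wait 60s for crictl version" steps in start.go are poll-until-deadline loops against the CRI-O socket. A minimal sketch of that pattern, assuming a plain stat-based check and a made-up polling interval:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the deadline
// passes. The 500ms interval below is an assumption, not minikube's value.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}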
	I0421 19:39:27.972648   54509 ssh_runner.go:195] Run: crio --version
	I0421 19:39:28.008377   54509 ssh_runner.go:195] Run: crio --version
	I0421 19:39:28.048981   54509 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0421 19:39:28.050216   54509 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:39:28.053251   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:28.053660   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:39:16 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:39:28.053711   54509 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:39:28.053986   54509 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0421 19:39:28.058769   54509 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:39:28.078634   54509 kubeadm.go:877] updating cluster {Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:39:28.078729   54509 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:39:28.078767   54509 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:39:28.130449   54509 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0421 19:39:28.130536   54509 ssh_runner.go:195] Run: which lz4
	I0421 19:39:28.135589   54509 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 19:39:28.140395   54509 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:39:28.140422   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0421 19:39:30.249508   54509 crio.go:462] duration metric: took 2.113972197s to copy over tarball
	I0421 19:39:30.249574   54509 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 19:39:33.272051   54509 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022454281s)
	I0421 19:39:33.272074   54509 crio.go:469] duration metric: took 3.022538929s to extract the tarball
	I0421 19:39:33.272081   54509 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 19:39:33.321506   54509 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:39:33.371280   54509 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0421 19:39:33.371306   54509 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0421 19:39:33.371379   54509 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:39:33.371400   54509 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:39:33.371418   54509 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:39:33.371437   54509 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:39:33.371475   54509 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:39:33.371484   54509 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0421 19:39:33.371612   54509 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0421 19:39:33.371643   54509 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:39:33.372723   54509 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:39:33.372823   54509 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:39:33.372858   54509 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0421 19:39:33.372925   54509 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0421 19:39:33.372723   54509 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:39:33.372954   54509 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:39:33.373025   54509 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:39:33.373155   54509 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:39:33.512067   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0421 19:39:33.520981   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:39:33.525006   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:39:33.526887   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:39:33.526903   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:39:33.586448   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0421 19:39:33.593329   54509 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0421 19:39:33.593389   54509 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:39:33.593434   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.599516   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0421 19:39:33.680438   54509 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0421 19:39:33.680483   54509 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:39:33.680526   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.701047   54509 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0421 19:39:33.701092   54509 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:39:33.701122   54509 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0421 19:39:33.701147   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.701153   54509 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0421 19:39:33.701173   54509 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0421 19:39:33.701173   54509 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:39:33.701216   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.701264   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0421 19:39:33.701288   54509 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0421 19:39:33.701216   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.701309   54509 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:39:33.701347   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.748398   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:39:33.748441   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:39:33.748461   54509 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0421 19:39:33.748493   54509 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0421 19:39:33.748512   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0421 19:39:33.748526   54509 ssh_runner.go:195] Run: which crictl
	I0421 19:39:33.779973   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:39:33.780058   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:39:33.780143   54509 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0421 19:39:33.782698   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0421 19:39:33.883827   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0421 19:39:33.883834   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0421 19:39:33.883877   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0421 19:39:33.909092   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0421 19:39:33.924982   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0421 19:39:33.925113   54509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0421 19:39:34.303879   54509 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:39:34.453015   54509 cache_images.go:92] duration metric: took 1.0816914s to LoadCachedImages
	W0421 19:39:34.453114   54509 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
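Each "needs transfer" line above means the image inspect against the runtime did not return the expected image ID, so the image is removed and then reloaded from the local cache (which fails here because the cached etcd image file is missing on the host). A simplified Go sketch of that decision; imageInRuntime is a hypothetical stand-in for the podman/crictl inspect calls:

package main

import "fmt"

// imageInRuntime is a stand-in for `podman image inspect` / `crictl images`;
// it would return the image ID found in the runtime, or "" if absent.
func imageInRuntime(ref string) string { return "" } // assumption: not present

// needsTransfer mirrors the "does not exist at hash ... needs transfer" check.
func needsTransfer(ref, wantID string) bool {
	got := imageInRuntime(ref)
	return got == "" || got != wantID
}

func main() {
	images := map[string]string{
		"registry.k8s.io/etcd:3.4.13-0": "0369cf4303ff",
		"registry.k8s.io/pause:3.2":     "80d28bedfe5d",
	}
	for ref, id := range images {
		if needsTransfer(ref, id) {
			fmt.Printf("%q needs transfer, loading from local cache\n", ref)
		}
	}
}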
	I0421 19:39:34.453144   54509 kubeadm.go:928] updating node { 192.168.50.42 8443 v1.20.0 crio true true} ...
	I0421 19:39:34.453246   54509 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-867585 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:39:34.453318   54509 ssh_runner.go:195] Run: crio config
	I0421 19:39:34.512693   54509 cni.go:84] Creating CNI manager for ""
	I0421 19:39:34.512720   54509 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:39:34.512735   54509 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:39:34.512759   54509 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.42 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-867585 NodeName:old-k8s-version-867585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0421 19:39:34.512934   54509 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-867585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
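The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml as several YAML documents. A quick way to sanity-check the generated file is to decode each document and confirm, for example, that the KubeletConfiguration's cgroupDriver matches the "cgroupfs" value written into the CRI-O drop-in earlier; the sketch below assumes gopkg.in/yaml.v3 and is not part of minikube:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Decode each YAML document in the generated kubeadm.yaml and print the
// kubelet cgroup driver. Field names are the standard kubelet config keys;
// everything else here is an illustrative sketch.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"]) // expect "cgroupfs"
		}
	}
}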
	
	I0421 19:39:34.513006   54509 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0421 19:39:34.524847   54509 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:39:34.524917   54509 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 19:39:34.536127   54509 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0421 19:39:34.555227   54509 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:39:34.579156   54509 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0421 19:39:34.600014   54509 ssh_runner.go:195] Run: grep 192.168.50.42	control-plane.minikube.internal$ /etc/hosts
	I0421 19:39:34.605773   54509 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:39:34.620114   54509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:39:34.751178   54509 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:39:34.770155   54509 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585 for IP: 192.168.50.42
	I0421 19:39:34.770182   54509 certs.go:194] generating shared ca certs ...
	I0421 19:39:34.770201   54509 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:39:34.770366   54509 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 19:39:34.770418   54509 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 19:39:34.770430   54509 certs.go:256] generating profile certs ...
	I0421 19:39:34.770506   54509 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.key
	I0421 19:39:34.770526   54509 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt with IP's: []
	I0421 19:39:34.915440   54509 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt ...
	I0421 19:39:34.915477   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: {Name:mk6a8b686c37047249f490f5450772b6c7b4bbb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:39:34.915681   54509 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.key ...
	I0421 19:39:34.915714   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.key: {Name:mk2ebec5c0312b9011b099a2d232048b922ddfc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:39:34.915864   54509 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key.ada31577
	I0421 19:39:34.915892   54509 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt.ada31577 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.42]
	I0421 19:39:35.146258   54509 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt.ada31577 ...
	I0421 19:39:35.146286   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt.ada31577: {Name:mkb81675dd06b08c1e57439c47dbc83ccf29942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:39:35.146477   54509 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key.ada31577 ...
	I0421 19:39:35.146499   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key.ada31577: {Name:mkb5d83f46b01690f82921fafb6c0aa95cac6775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:39:35.146617   54509 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt.ada31577 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt
	I0421 19:39:35.146713   54509 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key.ada31577 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key
	I0421 19:39:35.146795   54509 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.key
	I0421 19:39:35.146817   54509 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.crt with IP's: []
	I0421 19:39:35.391691   54509 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.crt ...
	I0421 19:39:35.391720   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.crt: {Name:mka9c3ac375259d465c4a299e9c552118ca1c0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:39:35.436552   54509 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.key ...
	I0421 19:39:35.436592   54509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.key: {Name:mk9332fcd8fa524dc019490410c206d89d341c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
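The crypto.go steps above generate CA-signed profile certificates whose IP SANs cover the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.42). A self-contained Go sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas minikube signs with its minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a key pair and a server certificate whose IP SANs match the
	// apiserver cert generated in the log above. Self-signed here as a sketch.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.42"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}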
	I0421 19:39:35.436864   54509 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 19:39:35.436916   54509 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 19:39:35.436933   54509 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 19:39:35.436971   54509 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 19:39:35.437005   54509 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 19:39:35.437038   54509 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 19:39:35.437098   54509 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:39:35.437647   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:39:35.467124   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:39:35.499512   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:39:35.530398   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:39:35.562705   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0421 19:39:35.589140   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:39:35.622251   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:39:35.657505   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 19:39:35.696665   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:39:35.723234   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 19:39:35.754345   54509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 19:39:35.800967   54509 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:39:35.837787   54509 ssh_runner.go:195] Run: openssl version
	I0421 19:39:35.845076   54509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 19:39:35.861038   54509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 19:39:35.868103   54509 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:39:35.868174   54509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 19:39:35.876859   54509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:39:35.894346   54509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:39:35.909660   54509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:39:35.915550   54509 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:39:35.915623   54509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:39:35.922687   54509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:39:35.939713   54509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 19:39:35.954310   54509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 19:39:35.960084   54509 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:39:35.960151   54509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 19:39:35.966979   54509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 19:39:35.986795   54509 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:39:35.994493   54509 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:39:35.994553   54509 kubeadm.go:391] StartCluster: {Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:39:35.994636   54509 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 19:39:35.994691   54509 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:39:36.045155   54509 cri.go:89] found id: ""
	I0421 19:39:36.045238   54509 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 19:39:36.062148   54509 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:39:36.078146   54509 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:39:36.094560   54509 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:39:36.094582   54509 kubeadm.go:156] found existing configuration files:
	
	I0421 19:39:36.094650   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:39:36.110455   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:39:36.110538   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:39:36.128052   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:39:36.144540   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:39:36.144619   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:39:36.161989   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:39:36.178846   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:39:36.178914   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:39:36.194039   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:39:36.206124   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:39:36.206210   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:39:36.223609   54509 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:39:36.371731   54509 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:39:36.371866   54509 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:39:36.556729   54509 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:39:36.556867   54509 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:39:36.556990   54509 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:39:36.825339   54509 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:39:36.826980   54509 out.go:204]   - Generating certificates and keys ...
	I0421 19:39:36.827075   54509 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:39:36.827158   54509 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:39:37.078161   54509 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 19:39:37.316663   54509 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 19:39:37.422212   54509 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 19:39:37.755668   54509 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 19:39:37.838990   54509 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 19:39:37.839160   54509 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-867585] and IPs [192.168.50.42 127.0.0.1 ::1]
	I0421 19:39:37.989278   54509 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 19:39:37.989494   54509 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-867585] and IPs [192.168.50.42 127.0.0.1 ::1]
	I0421 19:39:38.245142   54509 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 19:39:38.667390   54509 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 19:39:38.951333   54509 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 19:39:38.951751   54509 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:39:39.095323   54509 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:39:39.211487   54509 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:39:39.386286   54509 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:39:39.500502   54509 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:39:39.519666   54509 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:39:39.521463   54509 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:39:39.521535   54509 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:39:39.672563   54509 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:39:39.674757   54509 out.go:204]   - Booting up control plane ...
	I0421 19:39:39.674889   54509 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:39:39.687967   54509 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:39:39.688072   54509 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:39:39.689763   54509 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:39:39.694025   54509 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:40:19.691826   54509 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:40:19.691936   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:40:19.692298   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:40:24.693158   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:40:24.693479   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:40:34.694027   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:40:34.694280   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:40:54.695591   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:40:54.695909   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:41:34.695409   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:41:34.695657   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:41:34.695671   54509 kubeadm.go:309] 
	I0421 19:41:34.695720   54509 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:41:34.695777   54509 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:41:34.695789   54509 kubeadm.go:309] 
	I0421 19:41:34.695867   54509 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:41:34.695927   54509 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:41:34.696069   54509 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:41:34.696080   54509 kubeadm.go:309] 
	I0421 19:41:34.696217   54509 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:41:34.696257   54509 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:41:34.696299   54509 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:41:34.696308   54509 kubeadm.go:309] 
	I0421 19:41:34.696468   54509 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:41:34.696592   54509 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:41:34.696601   54509 kubeadm.go:309] 
	I0421 19:41:34.696744   54509 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:41:34.696879   54509 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:41:34.696990   54509 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:41:34.697094   54509 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:41:34.697116   54509 kubeadm.go:309] 
	I0421 19:41:34.697751   54509 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:41:34.697884   54509 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:41:34.697970   54509 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
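The repeated [kubelet-check] messages above are kubeadm probing the kubelet's healthz endpoint on 127.0.0.1:10248 until its 4m wait-control-plane deadline expires; the connection-refused errors mean the kubelet never came up. A minimal Go sketch of the same probe loop (the 5s interval is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers 200 OK
// or the overall timeout expires, mirroring kubeadm's kubelet-check.
func waitForKubelet(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err) // this is the failure mode reported in the log
	}
}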
	W0421 19:41:34.698134   54509 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-867585] and IPs [192.168.50.42 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-867585] and IPs [192.168.50.42 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-867585] and IPs [192.168.50.42 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-867585] and IPs [192.168.50.42 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0421 19:41:34.698178   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:41:37.671012   54509 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.972783143s)
	I0421 19:41:37.671097   54509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:41:37.687639   54509 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:41:37.699870   54509 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:41:37.699892   54509 kubeadm.go:156] found existing configuration files:
	
	I0421 19:41:37.699940   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:41:37.711659   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:41:37.711743   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:41:37.723427   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:41:37.734412   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:41:37.734470   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:41:37.745573   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:41:37.756644   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:41:37.756708   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:41:37.768688   54509 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:41:37.779405   54509 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:41:37.779475   54509 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:41:37.790281   54509 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:41:37.861649   54509 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:41:37.861752   54509 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:41:38.027449   54509 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:41:38.027547   54509 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:41:38.027678   54509 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:41:38.255934   54509 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:41:38.257851   54509 out.go:204]   - Generating certificates and keys ...
	I0421 19:41:38.257948   54509 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:41:38.258040   54509 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:41:38.258170   54509 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:41:38.258265   54509 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:41:38.258387   54509 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:41:38.258460   54509 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:41:38.258837   54509 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:41:38.259400   54509 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:41:38.259845   54509 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:41:38.260229   54509 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:41:38.260280   54509 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:41:38.260379   54509 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:41:38.412634   54509 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:41:38.679570   54509 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:41:38.782146   54509 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:41:38.930665   54509 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:41:38.945511   54509 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:41:38.946671   54509 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:41:38.946848   54509 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:41:39.126742   54509 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:41:39.128395   54509 out.go:204]   - Booting up control plane ...
	I0421 19:41:39.128502   54509 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:41:39.139592   54509 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:41:39.139689   54509 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:41:39.141907   54509 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:41:39.146753   54509 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:42:19.148850   54509 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:42:19.149091   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:42:19.149850   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:42:24.151018   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:42:24.151244   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:42:34.151953   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:42:34.152169   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:42:54.153478   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:42:54.153723   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:43:34.153619   54509 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:43:34.153836   54509 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:43:34.153853   54509 kubeadm.go:309] 
	I0421 19:43:34.153907   54509 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:43:34.153947   54509 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:43:34.153954   54509 kubeadm.go:309] 
	I0421 19:43:34.153983   54509 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:43:34.154039   54509 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:43:34.154236   54509 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:43:34.154251   54509 kubeadm.go:309] 
	I0421 19:43:34.154388   54509 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:43:34.154442   54509 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:43:34.154487   54509 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:43:34.154499   54509 kubeadm.go:309] 
	I0421 19:43:34.154637   54509 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:43:34.154781   54509 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:43:34.154794   54509 kubeadm.go:309] 
	I0421 19:43:34.154941   54509 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:43:34.155073   54509 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:43:34.155211   54509 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:43:34.155326   54509 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:43:34.155354   54509 kubeadm.go:309] 
	I0421 19:43:34.155772   54509 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:43:34.155881   54509 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:43:34.155943   54509 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:43:34.156000   54509 kubeadm.go:393] duration metric: took 3m58.161452319s to StartCluster
	I0421 19:43:34.156054   54509 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:43:34.156107   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:43:34.205254   54509 cri.go:89] found id: ""
	I0421 19:43:34.205280   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.205289   54509 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:43:34.205295   54509 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:43:34.205343   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:43:34.244867   54509 cri.go:89] found id: ""
	I0421 19:43:34.244897   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.244905   54509 logs.go:278] No container was found matching "etcd"
	I0421 19:43:34.244911   54509 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:43:34.244959   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:43:34.282550   54509 cri.go:89] found id: ""
	I0421 19:43:34.282580   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.282592   54509 logs.go:278] No container was found matching "coredns"
	I0421 19:43:34.282600   54509 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:43:34.282663   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:43:34.323222   54509 cri.go:89] found id: ""
	I0421 19:43:34.323246   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.323255   54509 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:43:34.323260   54509 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:43:34.323314   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:43:34.365557   54509 cri.go:89] found id: ""
	I0421 19:43:34.365588   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.365599   54509 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:43:34.365606   54509 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:43:34.365668   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:43:34.403535   54509 cri.go:89] found id: ""
	I0421 19:43:34.403565   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.403575   54509 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:43:34.403584   54509 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:43:34.403642   54509 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:43:34.442249   54509 cri.go:89] found id: ""
	I0421 19:43:34.442278   54509 logs.go:276] 0 containers: []
	W0421 19:43:34.442289   54509 logs.go:278] No container was found matching "kindnet"
	I0421 19:43:34.442299   54509 logs.go:123] Gathering logs for kubelet ...
	I0421 19:43:34.442314   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:43:34.494252   54509 logs.go:123] Gathering logs for dmesg ...
	I0421 19:43:34.494285   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:43:34.509636   54509 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:43:34.509663   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:43:34.632229   54509 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:43:34.632255   54509 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:43:34.632272   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:43:34.731122   54509 logs.go:123] Gathering logs for container status ...
	I0421 19:43:34.731159   54509 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0421 19:43:34.785839   54509 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:43:34.785893   54509 out.go:239] * 
	* 
	W0421 19:43:34.785952   54509 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:43:34.785982   54509 out.go:239] * 
	* 
	W0421 19:43:34.787187   54509 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:43:34.790722   54509 out.go:177] 
	W0421 19:43:34.791989   54509 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:43:34.792038   54509 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:43:34.792060   54509 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:43:34.794683   54509 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 6 (240.357926ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:43:35.079861   57328 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-867585" does not appear in /home/jenkins/minikube-integration/18702-3854/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867585" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (288.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-167454 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-167454 --alsologtostderr -v=3: exit status 82 (2m0.548720645s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-167454"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:41:26.734995   56664 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:41:26.735113   56664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:41:26.735118   56664 out.go:304] Setting ErrFile to fd 2...
	I0421 19:41:26.735123   56664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:41:26.735326   56664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:41:26.735574   56664 out.go:298] Setting JSON to false
	I0421 19:41:26.735658   56664 mustload.go:65] Loading cluster: default-k8s-diff-port-167454
	I0421 19:41:26.736000   56664 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:41:26.736061   56664 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/config.json ...
	I0421 19:41:26.736231   56664 mustload.go:65] Loading cluster: default-k8s-diff-port-167454
	I0421 19:41:26.736332   56664 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:41:26.736360   56664 stop.go:39] StopHost: default-k8s-diff-port-167454
	I0421 19:41:26.736704   56664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:41:26.736759   56664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:41:26.751797   56664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I0421 19:41:26.752256   56664 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:41:26.752809   56664 main.go:141] libmachine: Using API Version  1
	I0421 19:41:26.752829   56664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:41:26.753222   56664 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:41:26.756091   56664 out.go:177] * Stopping node "default-k8s-diff-port-167454"  ...
	I0421 19:41:26.757942   56664 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 19:41:26.757983   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:41:26.758287   56664 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 19:41:26.758314   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:41:26.761388   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:41:26.761829   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:40:33 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:41:26.761872   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:41:26.762024   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:41:26.762232   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:41:26.762416   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:41:26.762559   56664 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:41:26.878224   56664 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 19:41:26.947687   56664 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 19:41:27.021609   56664 main.go:141] libmachine: Stopping "default-k8s-diff-port-167454"...
	I0421 19:41:27.021653   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:41:27.023178   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Stop
	I0421 19:41:27.027212   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 0/120
	I0421 19:41:28.028623   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 1/120
	I0421 19:41:29.029841   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 2/120
	I0421 19:41:30.031369   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 3/120
	I0421 19:41:31.032872   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 4/120
	I0421 19:41:32.034863   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 5/120
	I0421 19:41:33.036604   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 6/120
	I0421 19:41:34.038281   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 7/120
	I0421 19:41:35.040013   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 8/120
	I0421 19:41:36.041407   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 9/120
	I0421 19:41:37.043647   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 10/120
	I0421 19:41:38.046363   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 11/120
	I0421 19:41:39.048751   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 12/120
	I0421 19:41:40.050262   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 13/120
	I0421 19:41:41.051938   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 14/120
	I0421 19:41:42.054267   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 15/120
	I0421 19:41:43.056664   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 16/120
	I0421 19:41:44.058137   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 17/120
	I0421 19:41:45.059616   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 18/120
	I0421 19:41:46.061220   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 19/120
	I0421 19:41:47.063664   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 20/120
	I0421 19:41:48.065006   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 21/120
	I0421 19:41:49.066472   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 22/120
	I0421 19:41:50.068490   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 23/120
	I0421 19:41:51.070108   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 24/120
	I0421 19:41:52.072196   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 25/120
	I0421 19:41:53.073329   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 26/120
	I0421 19:41:54.074725   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 27/120
	I0421 19:41:55.076077   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 28/120
	I0421 19:41:56.077337   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 29/120
	I0421 19:41:57.079674   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 30/120
	I0421 19:41:58.080827   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 31/120
	I0421 19:41:59.082288   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 32/120
	I0421 19:42:00.084533   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 33/120
	I0421 19:42:01.085776   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 34/120
	I0421 19:42:02.088073   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 35/120
	I0421 19:42:03.089470   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 36/120
	I0421 19:42:04.090600   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 37/120
	I0421 19:42:05.092398   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 38/120
	I0421 19:42:06.093814   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 39/120
	I0421 19:42:07.096013   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 40/120
	I0421 19:42:08.097207   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 41/120
	I0421 19:42:09.098699   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 42/120
	I0421 19:42:10.100571   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 43/120
	I0421 19:42:11.101974   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 44/120
	I0421 19:42:12.103813   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 45/120
	I0421 19:42:13.105421   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 46/120
	I0421 19:42:14.106808   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 47/120
	I0421 19:42:15.108547   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 48/120
	I0421 19:42:16.109900   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 49/120
	I0421 19:42:17.112046   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 50/120
	I0421 19:42:18.113428   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 51/120
	I0421 19:42:19.114643   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 52/120
	I0421 19:42:20.116353   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 53/120
	I0421 19:42:21.117428   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 54/120
	I0421 19:42:22.119311   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 55/120
	I0421 19:42:23.120495   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 56/120
	I0421 19:42:24.122072   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 57/120
	I0421 19:42:25.124123   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 58/120
	I0421 19:42:26.125457   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 59/120
	I0421 19:42:27.127721   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 60/120
	I0421 19:42:28.129001   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 61/120
	I0421 19:42:29.130441   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 62/120
	I0421 19:42:30.132480   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 63/120
	I0421 19:42:31.133877   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 64/120
	I0421 19:42:32.135873   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 65/120
	I0421 19:42:33.137284   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 66/120
	I0421 19:42:34.138771   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 67/120
	I0421 19:42:35.140484   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 68/120
	I0421 19:42:36.141873   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 69/120
	I0421 19:42:37.144001   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 70/120
	I0421 19:42:38.145295   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 71/120
	I0421 19:42:39.146700   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 72/120
	I0421 19:42:40.148590   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 73/120
	I0421 19:42:41.150107   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 74/120
	I0421 19:42:42.152084   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 75/120
	I0421 19:42:43.153966   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 76/120
	I0421 19:42:44.155312   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 77/120
	I0421 19:42:45.156878   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 78/120
	I0421 19:42:46.158101   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 79/120
	I0421 19:42:47.159518   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 80/120
	I0421 19:42:48.161330   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 81/120
	I0421 19:42:49.163007   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 82/120
	I0421 19:42:50.164448   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 83/120
	I0421 19:42:51.165756   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 84/120
	I0421 19:42:52.167670   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 85/120
	I0421 19:42:53.169076   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 86/120
	I0421 19:42:54.170420   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 87/120
	I0421 19:42:55.172401   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 88/120
	I0421 19:42:56.173901   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 89/120
	I0421 19:42:57.176166   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 90/120
	I0421 19:42:58.177892   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 91/120
	I0421 19:42:59.179201   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 92/120
	I0421 19:43:00.180684   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 93/120
	I0421 19:43:01.182190   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 94/120
	I0421 19:43:02.183522   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 95/120
	I0421 19:43:03.184853   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 96/120
	I0421 19:43:04.186122   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 97/120
	I0421 19:43:05.187587   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 98/120
	I0421 19:43:06.189014   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 99/120
	I0421 19:43:07.191074   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 100/120
	I0421 19:43:08.192576   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 101/120
	I0421 19:43:09.193921   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 102/120
	I0421 19:43:10.195174   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 103/120
	I0421 19:43:11.196321   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 104/120
	I0421 19:43:12.198191   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 105/120
	I0421 19:43:13.199585   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 106/120
	I0421 19:43:14.200928   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 107/120
	I0421 19:43:15.202125   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 108/120
	I0421 19:43:16.203379   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 109/120
	I0421 19:43:17.205513   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 110/120
	I0421 19:43:18.206904   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 111/120
	I0421 19:43:19.208431   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 112/120
	I0421 19:43:20.209542   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 113/120
	I0421 19:43:21.210889   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 114/120
	I0421 19:43:22.212914   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 115/120
	I0421 19:43:23.214271   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 116/120
	I0421 19:43:24.215690   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 117/120
	I0421 19:43:25.216918   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 118/120
	I0421 19:43:26.218306   56664 main.go:141] libmachine: (default-k8s-diff-port-167454) Waiting for machine to stop 119/120
	I0421 19:43:27.219465   56664 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0421 19:43:27.219514   56664 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0421 19:43:27.221616   56664 out.go:177] 
	W0421 19:43:27.223205   56664 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0421 19:43:27.223224   56664 out.go:239] * 
	* 
	W0421 19:43:27.225788   56664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:43:27.227323   56664 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-167454 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454: exit status 3 (18.535663629s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:43:45.762465   57271 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host
	E0421 19:43:45.762483   57271 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167454" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)
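The failed stop above follows a fixed shape: the VM config is backed up to /var/lib/minikube/backup, libvirt is asked to stop the domain, and the driver polls once a second for 120 iterations before exiting with GUEST_STOP_TIMEOUT. A sketch of the follow-up the error box asks for, plus an optional look at the domain through libvirt; the virsh lines assume the kvm2 driver named the domain after the profile, which the log does not confirm:

    # Gather the logs the error box requests
    out/minikube-linux-amd64 logs -p default-k8s-diff-port-167454 --file=logs.txt
    ls /tmp/minikube_stop_*.log

    # Optional: inspect the domain directly via libvirt (domain name assumed to match the profile)
    sudo virsh list --all
    sudo virsh dominfo default-k8s-diff-port-167454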

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-597568 --alsologtostderr -v=3
E0421 19:42:32.256930   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-597568 --alsologtostderr -v=3: exit status 82 (2m0.505881948s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-597568"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:42:01.890671   56885 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:42:01.890792   56885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:42:01.890802   56885 out.go:304] Setting ErrFile to fd 2...
	I0421 19:42:01.890806   56885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:42:01.891001   56885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:42:01.891236   56885 out.go:298] Setting JSON to false
	I0421 19:42:01.891314   56885 mustload.go:65] Loading cluster: no-preload-597568
	I0421 19:42:01.891643   56885 config.go:182] Loaded profile config "no-preload-597568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:42:01.891724   56885 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/config.json ...
	I0421 19:42:01.891919   56885 mustload.go:65] Loading cluster: no-preload-597568
	I0421 19:42:01.892043   56885 config.go:182] Loaded profile config "no-preload-597568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:42:01.892083   56885 stop.go:39] StopHost: no-preload-597568
	I0421 19:42:01.892517   56885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:42:01.892563   56885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:42:01.907467   56885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0421 19:42:01.908014   56885 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:42:01.908638   56885 main.go:141] libmachine: Using API Version  1
	I0421 19:42:01.908665   56885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:42:01.909003   56885 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:42:01.911370   56885 out.go:177] * Stopping node "no-preload-597568"  ...
	I0421 19:42:01.913176   56885 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 19:42:01.913216   56885 main.go:141] libmachine: (no-preload-597568) Calling .DriverName
	I0421 19:42:01.913443   56885 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 19:42:01.913469   56885 main.go:141] libmachine: (no-preload-597568) Calling .GetSSHHostname
	I0421 19:42:01.916377   56885 main.go:141] libmachine: (no-preload-597568) DBG | domain no-preload-597568 has defined MAC address 52:54:00:4e:bb:cd in network mk-no-preload-597568
	I0421 19:42:01.916821   56885 main.go:141] libmachine: (no-preload-597568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:bb:cd", ip: ""} in network mk-no-preload-597568: {Iface:virbr1 ExpiryTime:2024-04-21 20:40:05 +0000 UTC Type:0 Mac:52:54:00:4e:bb:cd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:no-preload-597568 Clientid:01:52:54:00:4e:bb:cd}
	I0421 19:42:01.916849   56885 main.go:141] libmachine: (no-preload-597568) DBG | domain no-preload-597568 has defined IP address 192.168.39.120 and MAC address 52:54:00:4e:bb:cd in network mk-no-preload-597568
	I0421 19:42:01.916984   56885 main.go:141] libmachine: (no-preload-597568) Calling .GetSSHPort
	I0421 19:42:01.917136   56885 main.go:141] libmachine: (no-preload-597568) Calling .GetSSHKeyPath
	I0421 19:42:01.917295   56885 main.go:141] libmachine: (no-preload-597568) Calling .GetSSHUsername
	I0421 19:42:01.917414   56885 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/no-preload-597568/id_rsa Username:docker}
	I0421 19:42:02.035179   56885 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 19:42:02.080777   56885 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 19:42:02.143238   56885 main.go:141] libmachine: Stopping "no-preload-597568"...
	I0421 19:42:02.143272   56885 main.go:141] libmachine: (no-preload-597568) Calling .GetState
	I0421 19:42:02.144880   56885 main.go:141] libmachine: (no-preload-597568) Calling .Stop
	I0421 19:42:02.148455   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 0/120
	I0421 19:42:03.149807   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 1/120
	I0421 19:42:04.151318   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 2/120
	I0421 19:42:05.152815   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 3/120
	I0421 19:42:06.154210   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 4/120
	I0421 19:42:07.156169   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 5/120
	I0421 19:42:08.157692   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 6/120
	I0421 19:42:09.158900   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 7/120
	I0421 19:42:10.160336   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 8/120
	I0421 19:42:11.161526   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 9/120
	I0421 19:42:12.163169   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 10/120
	I0421 19:42:13.164521   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 11/120
	I0421 19:42:14.165806   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 12/120
	I0421 19:42:15.166889   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 13/120
	I0421 19:42:16.168693   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 14/120
	I0421 19:42:17.170643   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 15/120
	I0421 19:42:18.172503   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 16/120
	I0421 19:42:19.173575   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 17/120
	I0421 19:42:20.174862   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 18/120
	I0421 19:42:21.176050   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 19/120
	I0421 19:42:22.178135   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 20/120
	I0421 19:42:23.179381   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 21/120
	I0421 19:42:24.180470   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 22/120
	I0421 19:42:25.182036   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 23/120
	I0421 19:42:26.183173   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 24/120
	I0421 19:42:27.184875   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 25/120
	I0421 19:42:28.186073   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 26/120
	I0421 19:42:29.187611   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 27/120
	I0421 19:42:30.188841   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 28/120
	I0421 19:42:31.190636   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 29/120
	I0421 19:42:32.192679   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 30/120
	I0421 19:42:33.194215   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 31/120
	I0421 19:42:34.195431   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 32/120
	I0421 19:42:35.196670   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 33/120
	I0421 19:42:36.197872   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 34/120
	I0421 19:42:37.199917   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 35/120
	I0421 19:42:38.201338   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 36/120
	I0421 19:42:39.202752   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 37/120
	I0421 19:42:40.204110   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 38/120
	I0421 19:42:41.205308   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 39/120
	I0421 19:42:42.207755   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 40/120
	I0421 19:42:43.208941   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 41/120
	I0421 19:42:44.211145   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 42/120
	I0421 19:42:45.212317   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 43/120
	I0421 19:42:46.213670   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 44/120
	I0421 19:42:47.215624   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 45/120
	I0421 19:42:48.216806   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 46/120
	I0421 19:42:49.218109   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 47/120
	I0421 19:42:50.219407   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 48/120
	I0421 19:42:51.220661   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 49/120
	I0421 19:42:52.222873   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 50/120
	I0421 19:42:53.224341   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 51/120
	I0421 19:42:54.226185   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 52/120
	I0421 19:42:55.227366   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 53/120
	I0421 19:42:56.229055   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 54/120
	I0421 19:42:57.231254   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 55/120
	I0421 19:42:58.232822   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 56/120
	I0421 19:42:59.234176   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 57/120
	I0421 19:43:00.235453   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 58/120
	I0421 19:43:01.236767   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 59/120
	I0421 19:43:02.238963   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 60/120
	I0421 19:43:03.240490   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 61/120
	I0421 19:43:04.241862   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 62/120
	I0421 19:43:05.243250   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 63/120
	I0421 19:43:06.244658   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 64/120
	I0421 19:43:07.246790   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 65/120
	I0421 19:43:08.248239   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 66/120
	I0421 19:43:09.249775   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 67/120
	I0421 19:43:10.251955   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 68/120
	I0421 19:43:11.253478   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 69/120
	I0421 19:43:12.255547   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 70/120
	I0421 19:43:13.256714   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 71/120
	I0421 19:43:14.258323   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 72/120
	I0421 19:43:15.259722   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 73/120
	I0421 19:43:16.261211   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 74/120
	I0421 19:43:17.262702   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 75/120
	I0421 19:43:18.264292   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 76/120
	I0421 19:43:19.265776   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 77/120
	I0421 19:43:20.267253   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 78/120
	I0421 19:43:21.268753   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 79/120
	I0421 19:43:22.270721   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 80/120
	I0421 19:43:23.272474   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 81/120
	I0421 19:43:24.273844   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 82/120
	I0421 19:43:25.275199   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 83/120
	I0421 19:43:26.276609   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 84/120
	I0421 19:43:27.278704   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 85/120
	I0421 19:43:28.280164   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 86/120
	I0421 19:43:29.281675   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 87/120
	I0421 19:43:30.283174   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 88/120
	I0421 19:43:31.284602   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 89/120
	I0421 19:43:32.286617   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 90/120
	I0421 19:43:33.288450   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 91/120
	I0421 19:43:34.290418   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 92/120
	I0421 19:43:35.292552   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 93/120
	I0421 19:43:36.293678   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 94/120
	I0421 19:43:37.295561   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 95/120
	I0421 19:43:38.296824   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 96/120
	I0421 19:43:39.298230   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 97/120
	I0421 19:43:40.299498   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 98/120
	I0421 19:43:41.300737   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 99/120
	I0421 19:43:42.302958   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 100/120
	I0421 19:43:43.304420   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 101/120
	I0421 19:43:44.305865   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 102/120
	I0421 19:43:45.307394   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 103/120
	I0421 19:43:46.308676   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 104/120
	I0421 19:43:47.310578   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 105/120
	I0421 19:43:48.312597   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 106/120
	I0421 19:43:49.314103   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 107/120
	I0421 19:43:50.315436   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 108/120
	I0421 19:43:51.316807   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 109/120
	I0421 19:43:52.319120   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 110/120
	I0421 19:43:53.320414   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 111/120
	I0421 19:43:54.321958   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 112/120
	I0421 19:43:55.323419   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 113/120
	I0421 19:43:56.324749   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 114/120
	I0421 19:43:57.326117   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 115/120
	I0421 19:43:58.327288   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 116/120
	I0421 19:43:59.328575   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 117/120
	I0421 19:44:00.330222   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 118/120
	I0421 19:44:01.331684   56885 main.go:141] libmachine: (no-preload-597568) Waiting for machine to stop 119/120
	I0421 19:44:02.332996   56885 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0421 19:44:02.333046   56885 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0421 19:44:02.335112   56885 out.go:177] 
	W0421 19:44:02.336655   56885 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0421 19:44:02.336689   56885 out.go:239] * 
	* 
	W0421 19:44:02.339437   56885 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:44:02.340801   56885 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-597568 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568
E0421 19:44:06.205122   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568: exit status 3 (18.492467405s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:44:20.834385   57675 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0421 19:44:20.834405   57675 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-597568" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-867585 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-867585 create -f testdata/busybox.yaml: exit status 1 (41.081495ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-867585" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-867585 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 6 (235.53317ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:43:35.355345   57367 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-867585" does not appear in /home/jenkins/minikube-integration/18702-3854/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867585" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 6 (235.631553ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:43:35.593079   57413 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-867585" does not appear in /home/jenkins/minikube-integration/18702-3854/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867585" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-867585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-867585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.953695503s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-867585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-867585 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-867585 describe deploy/metrics-server -n kube-system: exit status 1 (40.949821ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-867585" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-867585 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
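The enable-addon callback above runs kubectl on the node against the in-VM kubeconfig, so the localhost:8443 refusal points at the apiserver itself rather than at the metrics-server manifests. A brief check of that from inside the VM, assuming SSH to the node is still possible (profile name taken from the failing command; whether /healthz answers anonymously depends on the cluster's defaults):

    # Is kube-apiserver running, or repeatedly exiting, under CRI-O?
    minikube ssh -p old-k8s-version-867585 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube-apiserver"

    # Probe the secure port the addon's kubectl call was refused on
    minikube ssh -p old-k8s-version-867585 "curl -sk https://localhost:8443/healthz; echo"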
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 6 (233.562977ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:45:11.821047   58100 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-867585" does not appear in /home/jenkins/minikube-integration/18702-3854/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-867585" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.23s)
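For reference, a minimal shell sketch (not part of the test run above; the profile name is taken from the log) of how the missing kubeconfig context behind this failure could be confirmed and, where possible, repaired:

	# Check whether the context the test expects actually exists in the kubeconfig.
	kubectl config get-contexts -o name | grep -x old-k8s-version-867585 \
	  || echo "context old-k8s-version-867585 is missing from the kubeconfig"
	# minikube can rewrite the kubeconfig entry for an existing profile.
	minikube update-context -p old-k8s-version-867585
	# Re-run the query that returned "context ... does not exist" above.
	kubectl --context old-k8s-version-867585 -n kube-system describe deploy/metrics-server

Whether this helps depends on the apiserver actually being up; in the log above localhost:8443 was refusing connections, so repairing the kubeconfig alone would not make the metrics-server apply succeed.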

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454: exit status 3 (3.16530079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:43:48.930365   57507 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host
	E0421 19:43:48.930404   57507 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-167454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-167454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15340097s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-167454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454: exit status 3 (3.062755517s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:43:58.146449   57571 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host
	E0421 19:43:58.146472   57571 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.23:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167454" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
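A minimal diagnostic sketch for the "no route to host" errors above (assumptions: the kvm2 driver names the libvirt domain after the profile, and the IP from the log is still the one assigned to it):

	# Is the VM actually running according to libvirt?
	virsh -c qemu:///system domstate default-k8s-diff-port-167454
	# Does anything answer on the SSH port minikube tries to reach?
	nc -z -w 5 192.168.61.23 22 && echo "ssh port reachable" || echo "no route / ssh down"

An "Error" host status together with a dead SSH endpoint generally means the VM is off or its network is gone, which is also why the addon enable that follows fails at the crictl "check paused" step, since that check runs over the same SSH connection.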

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568: exit status 3 (3.19942808s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:44:24.034451   57772 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0421 19:44:24.034478   57772 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-597568 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-597568 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153246671s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-597568 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568: exit status 3 (3.062407389s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:44:33.250412   57855 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0421 19:44:33.250437   57855 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-597568" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
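The no-preload profile fails in the same way. A hedged reproduction sketch (commands copied from the log, not a suggested fix) that one might run by hand to see whether the host ever reaches the "Stopped" state the assertion expects:

	# Stop the profile again, then ask for the host state the test checks.
	minikube stop -p no-preload-597568
	minikube status -p no-preload-597568 --format '{{.Host}}'
	# Only if the state above is "Stopped" does the addon enable below stand a chance.
	minikube addons enable dashboard -p no-preload-597568 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4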

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (734.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0421 19:46:09.208970   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m12.812856558s)

                                                
                                                
-- stdout --
	* [old-k8s-version-867585] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-867585" primary control-plane node in "old-k8s-version-867585" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-867585" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:45:14.424926   58211 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:45:14.425056   58211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:45:14.425066   58211 out.go:304] Setting ErrFile to fd 2...
	I0421 19:45:14.425072   58211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:45:14.425272   58211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:45:14.425841   58211 out.go:298] Setting JSON to false
	I0421 19:45:14.426828   58211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5212,"bootTime":1713723502,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:45:14.426890   58211 start.go:139] virtualization: kvm guest
	I0421 19:45:14.429358   58211 out.go:177] * [old-k8s-version-867585] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:45:14.430916   58211 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:45:14.432394   58211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:45:14.430972   58211 notify.go:220] Checking for updates...
	I0421 19:45:14.435106   58211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:45:14.436519   58211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:45:14.438014   58211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:45:14.439322   58211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:45:14.440980   58211 config.go:182] Loaded profile config "old-k8s-version-867585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:45:14.441380   58211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:45:14.441414   58211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:45:14.456399   58211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40747
	I0421 19:45:14.456806   58211 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:45:14.457431   58211 main.go:141] libmachine: Using API Version  1
	I0421 19:45:14.457464   58211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:45:14.457901   58211 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:45:14.458214   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:45:14.460134   58211 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0421 19:45:14.461518   58211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:45:14.461812   58211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:45:14.461845   58211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:45:14.476476   58211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0421 19:45:14.476892   58211 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:45:14.477414   58211 main.go:141] libmachine: Using API Version  1
	I0421 19:45:14.477439   58211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:45:14.477742   58211 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:45:14.477956   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:45:14.515718   58211 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:45:14.517040   58211 start.go:297] selected driver: kvm2
	I0421 19:45:14.517057   58211 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 Clust
erName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:45:14.517196   58211 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:45:14.517976   58211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:45:14.518091   58211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:45:14.533568   58211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:45:14.535730   58211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:45:14.535792   58211 cni.go:84] Creating CNI manager for ""
	I0421 19:45:14.535805   58211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:45:14.535871   58211 start.go:340] cluster config:
	{Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:45:14.536005   58211 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:45:14.538849   58211 out.go:177] * Starting "old-k8s-version-867585" primary control-plane node in "old-k8s-version-867585" cluster
	I0421 19:45:14.540193   58211 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:45:14.540239   58211 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:45:14.540249   58211 cache.go:56] Caching tarball of preloaded images
	I0421 19:45:14.540364   58211 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:45:14.540402   58211 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0421 19:45:14.540534   58211 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/config.json ...
	I0421 19:45:14.540764   58211 start.go:360] acquireMachinesLock for old-k8s-version-867585: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:48:55.555714   58211 start.go:364] duration metric: took 3m41.014910589s to acquireMachinesLock for "old-k8s-version-867585"
	I0421 19:48:55.555780   58211 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:48:55.555787   58211 fix.go:54] fixHost starting: 
	I0421 19:48:55.556200   58211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:48:55.556236   58211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:48:55.572118   58211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0421 19:48:55.572550   58211 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:48:55.573029   58211 main.go:141] libmachine: Using API Version  1
	I0421 19:48:55.573058   58211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:48:55.573369   58211 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:48:55.573574   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:48:55.573721   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetState
	I0421 19:48:55.575240   58211 fix.go:112] recreateIfNeeded on old-k8s-version-867585: state=Stopped err=<nil>
	I0421 19:48:55.575260   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	W0421 19:48:55.575408   58211 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:48:55.577693   58211 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-867585" ...
	I0421 19:48:55.579316   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .Start
	I0421 19:48:55.579491   58211 main.go:141] libmachine: (old-k8s-version-867585) Ensuring networks are active...
	I0421 19:48:55.580120   58211 main.go:141] libmachine: (old-k8s-version-867585) Ensuring network default is active
	I0421 19:48:55.580506   58211 main.go:141] libmachine: (old-k8s-version-867585) Ensuring network mk-old-k8s-version-867585 is active
	I0421 19:48:55.580901   58211 main.go:141] libmachine: (old-k8s-version-867585) Getting domain xml...
	I0421 19:48:55.581639   58211 main.go:141] libmachine: (old-k8s-version-867585) Creating domain...
	I0421 19:48:56.810478   58211 main.go:141] libmachine: (old-k8s-version-867585) Waiting to get IP...
	I0421 19:48:56.811423   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:56.811859   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:56.811942   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:56.811845   59283 retry.go:31] will retry after 203.985509ms: waiting for machine to come up
	I0421 19:48:57.017397   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:57.017899   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:57.017920   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:57.017870   59283 retry.go:31] will retry after 322.423408ms: waiting for machine to come up
	I0421 19:48:57.342523   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:57.343059   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:57.343091   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:57.343022   59283 retry.go:31] will retry after 322.051582ms: waiting for machine to come up
	I0421 19:48:57.666422   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:57.666929   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:57.666957   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:57.666887   59283 retry.go:31] will retry after 425.975783ms: waiting for machine to come up
	I0421 19:48:58.094484   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:58.094934   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:58.094965   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:58.094888   59283 retry.go:31] will retry after 644.821616ms: waiting for machine to come up
	I0421 19:48:58.741843   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:58.742366   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:58.742398   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:58.742310   59283 retry.go:31] will retry after 842.351042ms: waiting for machine to come up
	I0421 19:48:59.586590   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:48:59.587067   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:48:59.587098   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:48:59.587004   59283 retry.go:31] will retry after 754.226201ms: waiting for machine to come up
	I0421 19:49:00.343038   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:00.343479   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:49:00.343500   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:49:00.343435   59283 retry.go:31] will retry after 1.386491102s: waiting for machine to come up
	I0421 19:49:01.731580   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:01.732092   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:49:01.732135   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:49:01.732047   59283 retry.go:31] will retry after 1.24543971s: waiting for machine to come up
	I0421 19:49:02.978959   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:02.979437   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:49:02.979461   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:49:02.979387   59283 retry.go:31] will retry after 2.040881843s: waiting for machine to come up
	I0421 19:49:05.022281   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:05.022740   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:49:05.022768   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:49:05.022687   59283 retry.go:31] will retry after 2.481206172s: waiting for machine to come up
	I0421 19:49:07.505375   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:07.505850   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:49:07.505880   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:49:07.505803   59283 retry.go:31] will retry after 3.41225346s: waiting for machine to come up
	I0421 19:49:10.919515   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:10.920118   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | unable to find current IP address of domain old-k8s-version-867585 in network mk-old-k8s-version-867585
	I0421 19:49:10.920155   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | I0421 19:49:10.920058   59283 retry.go:31] will retry after 3.317996514s: waiting for machine to come up
	I0421 19:49:14.239327   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.239814   58211 main.go:141] libmachine: (old-k8s-version-867585) Found IP for machine: 192.168.50.42
	I0421 19:49:14.239838   58211 main.go:141] libmachine: (old-k8s-version-867585) Reserving static IP address...
	I0421 19:49:14.239852   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has current primary IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.240293   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "old-k8s-version-867585", mac: "52:54:00:00:e4:26", ip: "192.168.50.42"} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.240319   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | skip adding static IP to network mk-old-k8s-version-867585 - found existing host DHCP lease matching {name: "old-k8s-version-867585", mac: "52:54:00:00:e4:26", ip: "192.168.50.42"}
	I0421 19:49:14.240339   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | Getting to WaitForSSH function...
	I0421 19:49:14.240348   58211 main.go:141] libmachine: (old-k8s-version-867585) Reserved static IP address: 192.168.50.42
	I0421 19:49:14.240360   58211 main.go:141] libmachine: (old-k8s-version-867585) Waiting for SSH to be available...
	I0421 19:49:14.242576   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.242919   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.242945   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.243114   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | Using SSH client type: external
	I0421 19:49:14.243140   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa (-rw-------)
	I0421 19:49:14.243171   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:49:14.243190   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | About to run SSH command:
	I0421 19:49:14.243204   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | exit 0
	I0421 19:49:14.374277   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | SSH cmd err, output: <nil>: 
	I0421 19:49:14.374653   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetConfigRaw
	I0421 19:49:14.375196   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:49:14.378100   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.378520   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.378549   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.378836   58211 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/config.json ...
	I0421 19:49:14.379046   58211 machine.go:94] provisionDockerMachine start ...
	I0421 19:49:14.379096   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:49:14.379328   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:14.381768   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.382150   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.382188   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.382342   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:14.382495   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.382660   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.382802   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:14.382957   58211 main.go:141] libmachine: Using SSH client type: native
	I0421 19:49:14.383111   58211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:49:14.383121   58211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:49:14.494747   58211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:49:14.494769   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:49:14.494990   58211 buildroot.go:166] provisioning hostname "old-k8s-version-867585"
	I0421 19:49:14.495021   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:49:14.495202   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:14.497995   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.498463   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.498504   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.498602   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:14.498796   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.498986   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.499152   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:14.499312   58211 main.go:141] libmachine: Using SSH client type: native
	I0421 19:49:14.499484   58211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:49:14.499505   58211 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-867585 && echo "old-k8s-version-867585" | sudo tee /etc/hostname
	I0421 19:49:14.625704   58211 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-867585
	
	I0421 19:49:14.625736   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:14.628560   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.629007   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.629042   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.629186   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:14.629361   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.629554   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.629708   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:14.629866   58211 main.go:141] libmachine: Using SSH client type: native
	I0421 19:49:14.630037   58211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:49:14.630074   58211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-867585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-867585/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-867585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:49:14.748830   58211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:49:14.748856   58211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:49:14.748888   58211 buildroot.go:174] setting up certificates
	I0421 19:49:14.748896   58211 provision.go:84] configureAuth start
	I0421 19:49:14.748910   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetMachineName
	I0421 19:49:14.749214   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:49:14.752057   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.752403   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.752429   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.752588   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:14.754752   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.755087   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.755115   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.755204   58211 provision.go:143] copyHostCerts
	I0421 19:49:14.755260   58211 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:49:14.755271   58211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:49:14.755325   58211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:49:14.755434   58211 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:49:14.755444   58211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:49:14.755475   58211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:49:14.755550   58211 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:49:14.755566   58211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:49:14.755592   58211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:49:14.755658   58211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-867585 san=[127.0.0.1 192.168.50.42 localhost minikube old-k8s-version-867585]
	I0421 19:49:14.876490   58211 provision.go:177] copyRemoteCerts
	I0421 19:49:14.876553   58211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:49:14.876593   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:14.879056   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.879353   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:14.879387   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:14.879523   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:14.879668   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:14.879830   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:14.879970   58211 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:49:14.965724   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0421 19:49:14.991490   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:49:15.016621   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:49:15.041427   58211 provision.go:87] duration metric: took 292.518321ms to configureAuth
	I0421 19:49:15.041452   58211 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:49:15.041628   58211 config.go:182] Loaded profile config "old-k8s-version-867585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:49:15.041712   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:15.044331   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.044719   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:15.044746   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.044899   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:15.045083   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.045236   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.045427   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:15.045588   58211 main.go:141] libmachine: Using SSH client type: native
	I0421 19:49:15.045785   58211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:49:15.045806   58211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:49:15.321123   58211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:49:15.321149   58211 machine.go:97] duration metric: took 942.088559ms to provisionDockerMachine
	I0421 19:49:15.321161   58211 start.go:293] postStartSetup for "old-k8s-version-867585" (driver="kvm2")
	I0421 19:49:15.321172   58211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:49:15.321189   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:49:15.321533   58211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:49:15.321568   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:15.324071   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.324428   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:15.324457   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.324630   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:15.324855   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.325059   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:15.325227   58211 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:49:15.410367   58211 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:49:15.414966   58211 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:49:15.414987   58211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:49:15.415042   58211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:49:15.415113   58211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:49:15.415196   58211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:49:15.425964   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:49:15.450801   58211 start.go:296] duration metric: took 129.62939ms for postStartSetup
	I0421 19:49:15.450840   58211 fix.go:56] duration metric: took 19.895052869s for fixHost
	I0421 19:49:15.450862   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:15.453538   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.453891   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:15.453917   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.454114   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:15.454314   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.454505   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.454638   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:15.454803   58211 main.go:141] libmachine: Using SSH client type: native
	I0421 19:49:15.454991   58211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0421 19:49:15.455004   58211 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:49:15.571731   58211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713728955.538548695
	
	I0421 19:49:15.571757   58211 fix.go:216] guest clock: 1713728955.538548695
	I0421 19:49:15.571766   58211 fix.go:229] Guest: 2024-04-21 19:49:15.538548695 +0000 UTC Remote: 2024-04-21 19:49:15.450845112 +0000 UTC m=+241.076644126 (delta=87.703583ms)
	I0421 19:49:15.571790   58211 fix.go:200] guest clock delta is within tolerance: 87.703583ms
	I0421 19:49:15.571798   58211 start.go:83] releasing machines lock for "old-k8s-version-867585", held for 20.016042583s
	I0421 19:49:15.571830   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:49:15.572127   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:49:15.575264   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.575644   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:15.575672   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.575879   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:49:15.576403   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:49:15.576598   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .DriverName
	I0421 19:49:15.576681   58211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:49:15.576721   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:15.576797   58211 ssh_runner.go:195] Run: cat /version.json
	I0421 19:49:15.576830   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHHostname
	I0421 19:49:15.579688   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.579908   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.580100   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:15.580126   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.580281   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:15.580403   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:15.580425   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.580427   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:15.580593   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:15.580596   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHPort
	I0421 19:49:15.580754   58211 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:49:15.580774   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHKeyPath
	I0421 19:49:15.580892   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetSSHUsername
	I0421 19:49:15.581007   58211 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/old-k8s-version-867585/id_rsa Username:docker}
	I0421 19:49:15.673877   58211 ssh_runner.go:195] Run: systemctl --version
	I0421 19:49:15.700521   58211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:49:15.870372   58211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:49:15.877788   58211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:49:15.877863   58211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:49:15.903173   58211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:49:15.903204   58211 start.go:494] detecting cgroup driver to use...
	I0421 19:49:15.903290   58211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:49:15.924735   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:49:15.944057   58211 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:49:15.944119   58211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:49:15.960825   58211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:49:15.980032   58211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:49:16.133886   58211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:49:16.286979   58211 docker.go:233] disabling docker service ...
	I0421 19:49:16.287060   58211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:49:16.304768   58211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:49:16.319664   58211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:49:16.470945   58211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:49:16.621860   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:49:16.640374   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:49:16.670087   58211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0421 19:49:16.670153   58211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:49:16.688068   58211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:49:16.688150   58211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:49:16.701918   58211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:49:16.714592   58211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:49:16.730698   58211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:49:16.747440   58211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:49:16.760034   58211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:49:16.760095   58211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:49:16.777608   58211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:49:16.794724   58211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:49:16.937286   58211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:49:17.140116   58211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:49:17.140193   58211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:49:17.147790   58211 start.go:562] Will wait 60s for crictl version
	I0421 19:49:17.147869   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:17.153077   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:49:17.197554   58211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:49:17.197641   58211 ssh_runner.go:195] Run: crio --version
	I0421 19:49:17.242845   58211 ssh_runner.go:195] Run: crio --version
	I0421 19:49:17.280069   58211 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0421 19:49:17.281782   58211 main.go:141] libmachine: (old-k8s-version-867585) Calling .GetIP
	I0421 19:49:17.285120   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:17.285625   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e4:26", ip: ""} in network mk-old-k8s-version-867585: {Iface:virbr2 ExpiryTime:2024-04-21 20:49:08 +0000 UTC Type:0 Mac:52:54:00:00:e4:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:old-k8s-version-867585 Clientid:01:52:54:00:00:e4:26}
	I0421 19:49:17.285667   58211 main.go:141] libmachine: (old-k8s-version-867585) DBG | domain old-k8s-version-867585 has defined IP address 192.168.50.42 and MAC address 52:54:00:00:e4:26 in network mk-old-k8s-version-867585
	I0421 19:49:17.285878   58211 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0421 19:49:17.290908   58211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:49:17.306541   58211 kubeadm.go:877] updating cluster {Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:49:17.306697   58211 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 19:49:17.306754   58211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:49:17.372057   58211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0421 19:49:17.372130   58211 ssh_runner.go:195] Run: which lz4
	I0421 19:49:17.377291   58211 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 19:49:17.382406   58211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:49:17.382437   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0421 19:49:19.533858   58211 crio.go:462] duration metric: took 2.156598446s to copy over tarball
	I0421 19:49:19.533947   58211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 19:49:23.353909   58211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.819924688s)
	I0421 19:49:23.353945   58211 crio.go:469] duration metric: took 3.820050591s to extract the tarball
	I0421 19:49:23.353955   58211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 19:49:23.406149   58211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:49:23.496574   58211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0421 19:49:23.496600   58211 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0421 19:49:23.496682   58211 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:49:23.496710   58211 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0421 19:49:23.496716   58211 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0421 19:49:23.496677   58211 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:49:23.496762   58211 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:49:23.496693   58211 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:49:23.496749   58211 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:49:23.496935   58211 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:49:23.498365   58211 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0421 19:49:23.498377   58211 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0421 19:49:23.498380   58211 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:49:23.498406   58211 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:49:23.498417   58211 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:49:23.498429   58211 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:49:23.498410   58211 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:49:23.498389   58211 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:49:23.630639   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:49:23.634073   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:49:23.645857   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:49:23.650251   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0421 19:49:23.654325   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:49:23.655810   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0421 19:49:23.704997   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0421 19:49:23.764470   58211 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0421 19:49:23.764523   58211 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:49:23.764572   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.811930   58211 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0421 19:49:23.811979   58211 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:49:23.812030   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.866070   58211 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0421 19:49:23.866118   58211 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:49:23.866166   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.866204   58211 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0421 19:49:23.866165   58211 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0421 19:49:23.866227   58211 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0421 19:49:23.866260   58211 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0421 19:49:23.866297   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.866230   58211 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0421 19:49:23.866331   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.866230   58211 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:49:23.866404   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.871026   58211 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0421 19:49:23.871065   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0421 19:49:23.871082   58211 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0421 19:49:23.871068   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0421 19:49:23.871127   58211 ssh_runner.go:195] Run: which crictl
	I0421 19:49:23.875069   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0421 19:49:23.884262   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0421 19:49:23.884371   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0421 19:49:23.884373   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0421 19:49:24.019627   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0421 19:49:24.019694   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0421 19:49:24.019715   58211 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0421 19:49:24.019786   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0421 19:49:24.019821   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0421 19:49:24.024119   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0421 19:49:24.024214   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0421 19:49:24.063919   58211 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0421 19:49:24.387710   58211 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:49:24.538479   58211 cache_images.go:92] duration metric: took 1.041860092s to LoadCachedImages
	W0421 19:49:24.538574   58211 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18702-3854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0421 19:49:24.538600   58211 kubeadm.go:928] updating node { 192.168.50.42 8443 v1.20.0 crio true true} ...
	I0421 19:49:24.538757   58211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-867585 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:49:24.538864   58211 ssh_runner.go:195] Run: crio config
	I0421 19:49:24.608583   58211 cni.go:84] Creating CNI manager for ""
	I0421 19:49:24.608621   58211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:49:24.608641   58211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:49:24.608667   58211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.42 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-867585 NodeName:old-k8s-version-867585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0421 19:49:24.608876   58211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-867585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 19:49:24.608965   58211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0421 19:49:24.622146   58211 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:49:24.622223   58211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 19:49:24.634331   58211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0421 19:49:24.657261   58211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:49:24.683005   58211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0421 19:49:24.706946   58211 ssh_runner.go:195] Run: grep 192.168.50.42	control-plane.minikube.internal$ /etc/hosts
	I0421 19:49:24.713225   58211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:49:24.728771   58211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:49:24.884488   58211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:49:24.905363   58211 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585 for IP: 192.168.50.42
	I0421 19:49:24.905392   58211 certs.go:194] generating shared ca certs ...
	I0421 19:49:24.905430   58211 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:49:24.905628   58211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 19:49:24.905703   58211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 19:49:24.905721   58211 certs.go:256] generating profile certs ...
	I0421 19:49:24.905862   58211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.key
	I0421 19:49:24.905972   58211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key.ada31577
	I0421 19:49:24.906038   58211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.key
	I0421 19:49:24.906268   58211 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 19:49:24.906326   58211 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 19:49:24.906338   58211 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 19:49:24.906376   58211 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 19:49:24.906421   58211 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 19:49:24.906458   58211 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 19:49:24.906528   58211 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:49:24.907465   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:49:24.952370   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:49:24.995776   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:49:25.063712   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:49:25.105067   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0421 19:49:25.139419   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:49:25.179089   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:49:25.212095   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 19:49:25.241087   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:49:25.271451   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 19:49:25.302377   58211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 19:49:25.332527   58211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:49:25.356318   58211 ssh_runner.go:195] Run: openssl version
	I0421 19:49:25.363622   58211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 19:49:25.376996   58211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 19:49:25.383955   58211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 19:49:25.384024   58211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 19:49:25.391240   58211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 19:49:25.404707   58211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 19:49:25.417329   58211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 19:49:25.423162   58211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 19:49:25.423240   58211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 19:49:25.429572   58211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:49:25.442202   58211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:49:25.456154   58211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:49:25.461435   58211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:49:25.461491   58211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:49:25.467857   58211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:49:25.480604   58211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:49:25.485966   58211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 19:49:25.492859   58211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 19:49:25.499364   58211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 19:49:25.506099   58211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 19:49:25.512911   58211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 19:49:25.519406   58211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 19:49:25.525870   58211 kubeadm.go:391] StartCluster: {Name:old-k8s-version-867585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-867585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.42 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:49:25.525964   58211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 19:49:25.526017   58211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:49:25.572792   58211 cri.go:89] found id: ""
	I0421 19:49:25.572883   58211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0421 19:49:25.585172   58211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 19:49:25.585197   58211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 19:49:25.585203   58211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 19:49:25.585249   58211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 19:49:25.597095   58211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 19:49:25.597867   58211 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-867585" does not appear in /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:49:25.598267   58211 kubeconfig.go:62] /home/jenkins/minikube-integration/18702-3854/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-867585" cluster setting kubeconfig missing "old-k8s-version-867585" context setting]
	I0421 19:49:25.598898   58211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:49:25.600286   58211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 19:49:25.611181   58211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.42
	I0421 19:49:25.611217   58211 kubeadm.go:1154] stopping kube-system containers ...
	I0421 19:49:25.611229   58211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0421 19:49:25.611281   58211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 19:49:25.659044   58211 cri.go:89] found id: ""
	I0421 19:49:25.659124   58211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 19:49:25.680567   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:49:25.692305   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:49:25.692325   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:49:25.692393   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:49:25.705222   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:49:25.705276   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:49:25.717442   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:49:25.731505   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:49:25.731562   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:49:25.743985   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:49:25.756710   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:49:25.756755   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:49:25.768491   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:49:25.779488   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:49:25.779541   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:49:25.790754   58211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:49:25.802021   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:49:25.938116   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:49:27.257480   58211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.319315726s)
	I0421 19:49:27.257518   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:49:27.540072   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:49:27.649105   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 19:49:27.760545   58211 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:49:27.760661   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:28.261277   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:28.761570   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:29.261212   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:29.761229   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:30.261465   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:30.760747   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:31.261424   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:31.760774   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:32.260915   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:32.760900   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:33.261534   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:33.761145   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:34.260797   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:34.761128   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:35.261346   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:35.760871   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:36.260896   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:36.761555   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:37.261619   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:37.761697   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:38.261167   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:38.760676   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:39.261095   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:39.761115   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:40.261660   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:40.761164   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:41.261506   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:41.761385   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:49:42.261420   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" entries repeated at ~0.5s intervals from 19:49:42.761 through 19:50:26.761 ...]
	I0421 19:50:27.261181   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:27.760796   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:27.760865   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:27.811412   58211 cri.go:89] found id: ""
	I0421 19:50:27.811435   58211 logs.go:276] 0 containers: []
	W0421 19:50:27.811445   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:27.811453   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:27.811512   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:27.852815   58211 cri.go:89] found id: ""
	I0421 19:50:27.852847   58211 logs.go:276] 0 containers: []
	W0421 19:50:27.852857   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:27.852864   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:27.852931   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:27.889992   58211 cri.go:89] found id: ""
	I0421 19:50:27.890023   58211 logs.go:276] 0 containers: []
	W0421 19:50:27.890035   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:27.890042   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:27.890116   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:27.929675   58211 cri.go:89] found id: ""
	I0421 19:50:27.929715   58211 logs.go:276] 0 containers: []
	W0421 19:50:27.929728   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:27.929737   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:27.929827   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:27.972858   58211 cri.go:89] found id: ""
	I0421 19:50:27.972888   58211 logs.go:276] 0 containers: []
	W0421 19:50:27.972900   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:27.972907   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:27.972996   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:28.018883   58211 cri.go:89] found id: ""
	I0421 19:50:28.018910   58211 logs.go:276] 0 containers: []
	W0421 19:50:28.018935   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:28.018943   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:28.018998   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:28.061654   58211 cri.go:89] found id: ""
	I0421 19:50:28.061678   58211 logs.go:276] 0 containers: []
	W0421 19:50:28.061686   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:28.061698   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:28.061780   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:28.103191   58211 cri.go:89] found id: ""
	I0421 19:50:28.103218   58211 logs.go:276] 0 containers: []
	W0421 19:50:28.103228   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:28.103244   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:28.103259   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:28.168724   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:28.168766   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:28.188518   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:28.188552   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:28.324864   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:28.324889   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:28.324909   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:28.392907   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:28.392942   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:30.946179   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:30.963292   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:30.963389   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:31.013475   58211 cri.go:89] found id: ""
	I0421 19:50:31.013506   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.013517   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:31.013525   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:31.013583   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:31.065510   58211 cri.go:89] found id: ""
	I0421 19:50:31.065559   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.065570   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:31.065576   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:31.065623   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:31.110044   58211 cri.go:89] found id: ""
	I0421 19:50:31.110079   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.110087   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:31.110095   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:31.110148   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:31.152843   58211 cri.go:89] found id: ""
	I0421 19:50:31.152871   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.152882   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:31.152890   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:31.152944   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:31.193872   58211 cri.go:89] found id: ""
	I0421 19:50:31.193901   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.193911   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:31.193918   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:31.193999   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:31.239375   58211 cri.go:89] found id: ""
	I0421 19:50:31.239404   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.239415   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:31.239423   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:31.239500   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:31.284034   58211 cri.go:89] found id: ""
	I0421 19:50:31.284069   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.284081   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:31.284088   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:31.284150   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:31.326875   58211 cri.go:89] found id: ""
	I0421 19:50:31.326904   58211 logs.go:276] 0 containers: []
	W0421 19:50:31.326913   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:31.326921   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:31.326931   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:31.380506   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:31.380547   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:31.396159   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:31.396188   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:31.477872   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:31.477904   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:31.477920   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:31.566874   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:31.566907   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:34.123904   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:34.138784   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:34.138880   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:34.183830   58211 cri.go:89] found id: ""
	I0421 19:50:34.183859   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.183869   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:34.183876   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:34.183935   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:34.230801   58211 cri.go:89] found id: ""
	I0421 19:50:34.230831   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.230842   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:34.230849   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:34.230909   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:34.272644   58211 cri.go:89] found id: ""
	I0421 19:50:34.272667   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.272693   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:34.272700   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:34.272755   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:34.317260   58211 cri.go:89] found id: ""
	I0421 19:50:34.317308   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.317316   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:34.317322   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:34.317379   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:34.362329   58211 cri.go:89] found id: ""
	I0421 19:50:34.362365   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.362376   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:34.362384   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:34.362446   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:34.407539   58211 cri.go:89] found id: ""
	I0421 19:50:34.407567   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.407578   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:34.407587   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:34.407651   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:34.454811   58211 cri.go:89] found id: ""
	I0421 19:50:34.454839   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.454850   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:34.454858   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:34.454936   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:34.495424   58211 cri.go:89] found id: ""
	I0421 19:50:34.495456   58211 logs.go:276] 0 containers: []
	W0421 19:50:34.495465   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:34.495476   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:34.495491   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:34.513237   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:34.513265   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:34.600435   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:34.600459   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:34.600477   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:34.676400   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:34.676426   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:34.731482   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:34.731510   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:37.288912   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:37.312212   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:37.312285   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:37.353530   58211 cri.go:89] found id: ""
	I0421 19:50:37.353557   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.353569   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:37.353577   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:37.353626   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:37.396729   58211 cri.go:89] found id: ""
	I0421 19:50:37.396805   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.396837   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:37.396848   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:37.396907   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:37.441562   58211 cri.go:89] found id: ""
	I0421 19:50:37.441592   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.441604   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:37.441611   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:37.441673   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:37.484989   58211 cri.go:89] found id: ""
	I0421 19:50:37.485012   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.485020   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:37.485025   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:37.485082   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:37.529407   58211 cri.go:89] found id: ""
	I0421 19:50:37.529436   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.529444   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:37.529449   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:37.529498   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:37.578366   58211 cri.go:89] found id: ""
	I0421 19:50:37.578399   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.578410   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:37.578429   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:37.578496   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:37.621551   58211 cri.go:89] found id: ""
	I0421 19:50:37.621578   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.621589   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:37.621613   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:37.621683   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:37.671972   58211 cri.go:89] found id: ""
	I0421 19:50:37.671999   58211 logs.go:276] 0 containers: []
	W0421 19:50:37.672008   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:37.672032   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:37.672059   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:37.735501   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:37.735534   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:37.752898   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:37.752928   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:37.835167   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:37.835196   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:37.835229   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:37.915822   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:37.915857   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:40.468998   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:40.483791   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:40.483843   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:40.521297   58211 cri.go:89] found id: ""
	I0421 19:50:40.521326   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.521336   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:40.521344   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:40.521396   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:40.564833   58211 cri.go:89] found id: ""
	I0421 19:50:40.564864   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.564885   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:40.564892   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:40.564956   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:40.604022   58211 cri.go:89] found id: ""
	I0421 19:50:40.604050   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.604059   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:40.604066   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:40.604121   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:40.649312   58211 cri.go:89] found id: ""
	I0421 19:50:40.649342   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.649351   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:40.649378   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:40.649447   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:40.690409   58211 cri.go:89] found id: ""
	I0421 19:50:40.690430   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.690437   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:40.690442   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:40.690496   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:40.734018   58211 cri.go:89] found id: ""
	I0421 19:50:40.734037   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.734045   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:40.734051   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:40.734114   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:40.782591   58211 cri.go:89] found id: ""
	I0421 19:50:40.782637   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.782647   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:40.782654   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:40.782712   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:40.829308   58211 cri.go:89] found id: ""
	I0421 19:50:40.829342   58211 logs.go:276] 0 containers: []
	W0421 19:50:40.829354   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:40.829363   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:40.829379   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:40.844206   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:40.844237   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:40.920729   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:40.920752   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:40.920766   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:40.998421   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:40.998455   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:41.041034   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:41.041071   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:43.598759   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:43.616052   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:43.616123   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:43.658846   58211 cri.go:89] found id: ""
	I0421 19:50:43.658872   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.658881   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:43.658889   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:43.658956   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:43.700983   58211 cri.go:89] found id: ""
	I0421 19:50:43.701011   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.701021   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:43.701028   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:43.701085   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:43.754336   58211 cri.go:89] found id: ""
	I0421 19:50:43.754365   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.754376   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:43.754383   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:43.754451   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:43.807355   58211 cri.go:89] found id: ""
	I0421 19:50:43.807387   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.807399   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:43.807408   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:43.807473   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:43.858771   58211 cri.go:89] found id: ""
	I0421 19:50:43.858791   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.858800   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:43.858807   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:43.858861   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:43.909300   58211 cri.go:89] found id: ""
	I0421 19:50:43.909323   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.909330   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:43.909336   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:43.909381   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:43.949644   58211 cri.go:89] found id: ""
	I0421 19:50:43.949678   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.949689   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:43.949696   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:43.949763   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:43.998302   58211 cri.go:89] found id: ""
	I0421 19:50:43.998347   58211 logs.go:276] 0 containers: []
	W0421 19:50:43.998359   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:43.998371   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:43.998391   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:44.051811   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:44.051851   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:44.068358   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:44.068400   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:44.165289   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:44.165316   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:44.165334   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:44.250688   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:44.250722   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:46.802854   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:46.821966   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:46.822044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:46.873439   58211 cri.go:89] found id: ""
	I0421 19:50:46.873472   58211 logs.go:276] 0 containers: []
	W0421 19:50:46.873483   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:46.873492   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:46.873556   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:46.929917   58211 cri.go:89] found id: ""
	I0421 19:50:46.929946   58211 logs.go:276] 0 containers: []
	W0421 19:50:46.929957   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:46.929968   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:46.930032   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:46.983555   58211 cri.go:89] found id: ""
	I0421 19:50:46.983588   58211 logs.go:276] 0 containers: []
	W0421 19:50:46.983597   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:46.983604   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:46.983670   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:47.039497   58211 cri.go:89] found id: ""
	I0421 19:50:47.039523   58211 logs.go:276] 0 containers: []
	W0421 19:50:47.039533   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:47.039544   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:47.039607   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:47.100850   58211 cri.go:89] found id: ""
	I0421 19:50:47.100876   58211 logs.go:276] 0 containers: []
	W0421 19:50:47.100887   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:47.100893   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:47.100955   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:47.156797   58211 cri.go:89] found id: ""
	I0421 19:50:47.156826   58211 logs.go:276] 0 containers: []
	W0421 19:50:47.156836   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:47.156844   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:47.156915   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:47.213411   58211 cri.go:89] found id: ""
	I0421 19:50:47.213439   58211 logs.go:276] 0 containers: []
	W0421 19:50:47.213451   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:47.213460   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:47.213532   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:47.269548   58211 cri.go:89] found id: ""
	I0421 19:50:47.269583   58211 logs.go:276] 0 containers: []
	W0421 19:50:47.269594   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:47.269606   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:47.269622   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:47.287657   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:47.287693   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:47.394564   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:47.394590   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:47.394605   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:47.492892   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:47.492933   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:47.550630   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:47.550665   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:50.116698   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:50.133611   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:50.133712   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:50.181508   58211 cri.go:89] found id: ""
	I0421 19:50:50.181544   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.181555   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:50.181565   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:50.181637   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:50.225852   58211 cri.go:89] found id: ""
	I0421 19:50:50.225884   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.225900   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:50.225909   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:50.226018   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:50.286353   58211 cri.go:89] found id: ""
	I0421 19:50:50.286385   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.286395   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:50.286403   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:50.286464   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:50.343931   58211 cri.go:89] found id: ""
	I0421 19:50:50.343962   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.343973   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:50.343980   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:50.344041   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:50.390841   58211 cri.go:89] found id: ""
	I0421 19:50:50.390870   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.390879   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:50.390886   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:50.390955   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:50.440302   58211 cri.go:89] found id: ""
	I0421 19:50:50.440330   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.440340   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:50.440347   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:50.440395   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:50.479934   58211 cri.go:89] found id: ""
	I0421 19:50:50.479969   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.479981   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:50.479987   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:50.480049   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:50.523885   58211 cri.go:89] found id: ""
	I0421 19:50:50.523907   58211 logs.go:276] 0 containers: []
	W0421 19:50:50.523914   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:50.523922   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:50.523934   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:50.578206   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:50.578234   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:50.595773   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:50.595799   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:50.683483   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:50.683501   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:50.683513   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:50.774971   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:50.775012   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:53.329788   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:53.344997   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:53.345072   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:53.388307   58211 cri.go:89] found id: ""
	I0421 19:50:53.388340   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.388353   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:53.388363   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:53.388433   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:53.435360   58211 cri.go:89] found id: ""
	I0421 19:50:53.435439   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.435454   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:53.435468   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:53.435540   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:53.492992   58211 cri.go:89] found id: ""
	I0421 19:50:53.493022   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.493033   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:53.493040   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:53.493088   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:53.543247   58211 cri.go:89] found id: ""
	I0421 19:50:53.543326   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.543350   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:53.543363   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:53.543426   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:53.585294   58211 cri.go:89] found id: ""
	I0421 19:50:53.585322   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.585331   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:53.585343   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:53.585406   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:53.629823   58211 cri.go:89] found id: ""
	I0421 19:50:53.629847   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.629853   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:53.629860   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:53.629905   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:53.676362   58211 cri.go:89] found id: ""
	I0421 19:50:53.676392   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.676401   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:53.676408   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:53.676476   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:53.717619   58211 cri.go:89] found id: ""
	I0421 19:50:53.717653   58211 logs.go:276] 0 containers: []
	W0421 19:50:53.717664   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:53.717675   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:53.717703   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:53.772131   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:53.772170   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:53.788049   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:53.788084   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:53.874908   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:53.874931   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:53.874950   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:53.960170   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:53.960202   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:56.507350   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:56.525016   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:56.525098   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:56.573805   58211 cri.go:89] found id: ""
	I0421 19:50:56.573838   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.573849   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:56.573857   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:56.573917   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:56.622741   58211 cri.go:89] found id: ""
	I0421 19:50:56.622770   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.622780   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:56.622788   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:56.622848   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:56.665600   58211 cri.go:89] found id: ""
	I0421 19:50:56.665629   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.665640   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:56.665653   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:56.665729   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:56.712744   58211 cri.go:89] found id: ""
	I0421 19:50:56.712775   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.712785   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:56.712792   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:56.712851   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:56.759671   58211 cri.go:89] found id: ""
	I0421 19:50:56.759697   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.759705   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:56.759711   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:56.759762   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:50:56.801912   58211 cri.go:89] found id: ""
	I0421 19:50:56.801943   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.801954   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:50:56.801962   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:50:56.802025   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:50:56.847398   58211 cri.go:89] found id: ""
	I0421 19:50:56.847435   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.847444   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:50:56.847453   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:50:56.847508   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:50:56.895669   58211 cri.go:89] found id: ""
	I0421 19:50:56.895691   58211 logs.go:276] 0 containers: []
	W0421 19:50:56.895699   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:50:56.895706   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:50:56.895718   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:50:56.967622   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:50:56.967665   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:50:56.986544   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:50:56.986575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:50:57.068916   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:50:57.068947   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:50:57.068968   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:50:57.178943   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:50:57.178981   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:50:59.727711   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:50:59.747053   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:50:59.747129   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:50:59.795210   58211 cri.go:89] found id: ""
	I0421 19:50:59.795241   58211 logs.go:276] 0 containers: []
	W0421 19:50:59.795252   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:50:59.795260   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:50:59.795317   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:50:59.849709   58211 cri.go:89] found id: ""
	I0421 19:50:59.849749   58211 logs.go:276] 0 containers: []
	W0421 19:50:59.849760   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:50:59.849767   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:50:59.849831   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:50:59.896170   58211 cri.go:89] found id: ""
	I0421 19:50:59.896194   58211 logs.go:276] 0 containers: []
	W0421 19:50:59.896202   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:50:59.896207   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:50:59.896267   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:50:59.937263   58211 cri.go:89] found id: ""
	I0421 19:50:59.937295   58211 logs.go:276] 0 containers: []
	W0421 19:50:59.937306   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:50:59.937313   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:50:59.937379   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:50:59.979502   58211 cri.go:89] found id: ""
	I0421 19:50:59.979529   58211 logs.go:276] 0 containers: []
	W0421 19:50:59.979540   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:50:59.979547   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:50:59.979611   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:00.016426   58211 cri.go:89] found id: ""
	I0421 19:51:00.016456   58211 logs.go:276] 0 containers: []
	W0421 19:51:00.016467   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:00.016474   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:00.016557   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:00.056229   58211 cri.go:89] found id: ""
	I0421 19:51:00.056261   58211 logs.go:276] 0 containers: []
	W0421 19:51:00.056271   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:00.056278   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:00.056347   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:00.097492   58211 cri.go:89] found id: ""
	I0421 19:51:00.097520   58211 logs.go:276] 0 containers: []
	W0421 19:51:00.097531   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:00.097543   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:00.097557   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:00.143727   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:00.143761   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:00.200723   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:00.200759   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:00.216224   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:00.216254   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:00.298523   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:00.298551   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:00.298565   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:02.882670   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:02.899767   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:02.899844   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:02.950279   58211 cri.go:89] found id: ""
	I0421 19:51:02.950309   58211 logs.go:276] 0 containers: []
	W0421 19:51:02.950320   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:02.950327   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:02.950387   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:02.991428   58211 cri.go:89] found id: ""
	I0421 19:51:02.991468   58211 logs.go:276] 0 containers: []
	W0421 19:51:02.991480   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:02.991487   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:02.991553   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:03.038729   58211 cri.go:89] found id: ""
	I0421 19:51:03.038769   58211 logs.go:276] 0 containers: []
	W0421 19:51:03.038781   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:03.038789   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:03.038852   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:03.087321   58211 cri.go:89] found id: ""
	I0421 19:51:03.087351   58211 logs.go:276] 0 containers: []
	W0421 19:51:03.087361   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:03.087369   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:03.087428   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:03.138256   58211 cri.go:89] found id: ""
	I0421 19:51:03.138288   58211 logs.go:276] 0 containers: []
	W0421 19:51:03.138300   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:03.138308   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:03.138371   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:03.192102   58211 cri.go:89] found id: ""
	I0421 19:51:03.192125   58211 logs.go:276] 0 containers: []
	W0421 19:51:03.192136   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:03.192143   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:03.192209   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:03.241742   58211 cri.go:89] found id: ""
	I0421 19:51:03.241774   58211 logs.go:276] 0 containers: []
	W0421 19:51:03.241787   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:03.241796   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:03.241859   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:03.294765   58211 cri.go:89] found id: ""
	I0421 19:51:03.294793   58211 logs.go:276] 0 containers: []
	W0421 19:51:03.294803   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:03.294814   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:03.294829   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:03.361212   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:03.361241   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:03.376993   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:03.377020   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:03.462412   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:03.462432   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:03.462444   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:03.550549   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:03.550577   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:06.100780   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:06.118624   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:06.118710   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:06.160947   58211 cri.go:89] found id: ""
	I0421 19:51:06.160967   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.160975   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:06.160980   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:06.161027   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:06.208885   58211 cri.go:89] found id: ""
	I0421 19:51:06.208913   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.208922   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:06.208929   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:06.209009   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:06.254820   58211 cri.go:89] found id: ""
	I0421 19:51:06.254847   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.254858   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:06.254866   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:06.254923   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:06.297238   58211 cri.go:89] found id: ""
	I0421 19:51:06.297269   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.297280   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:06.297287   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:06.297346   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:06.342701   58211 cri.go:89] found id: ""
	I0421 19:51:06.342725   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.342734   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:06.342741   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:06.342787   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:06.382582   58211 cri.go:89] found id: ""
	I0421 19:51:06.382609   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.382619   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:06.382626   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:06.382673   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:06.426773   58211 cri.go:89] found id: ""
	I0421 19:51:06.426796   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.426810   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:06.426817   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:06.427116   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:06.471098   58211 cri.go:89] found id: ""
	I0421 19:51:06.471122   58211 logs.go:276] 0 containers: []
	W0421 19:51:06.471129   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:06.471136   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:06.471150   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:06.529936   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:06.529967   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:06.547352   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:06.547386   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:06.629210   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:06.629238   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:06.629255   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:06.720502   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:06.720538   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:09.268800   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:09.285570   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:09.285645   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:09.331453   58211 cri.go:89] found id: ""
	I0421 19:51:09.331481   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.331490   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:09.331496   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:09.331542   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:09.376906   58211 cri.go:89] found id: ""
	I0421 19:51:09.376931   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.376939   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:09.376945   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:09.376988   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:09.420170   58211 cri.go:89] found id: ""
	I0421 19:51:09.420195   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.420202   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:09.420208   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:09.420259   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:09.465958   58211 cri.go:89] found id: ""
	I0421 19:51:09.465987   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.465998   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:09.466006   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:09.466083   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:09.520628   58211 cri.go:89] found id: ""
	I0421 19:51:09.520668   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.520679   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:09.520687   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:09.520747   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:09.574191   58211 cri.go:89] found id: ""
	I0421 19:51:09.574217   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.574227   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:09.574233   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:09.574286   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:09.619586   58211 cri.go:89] found id: ""
	I0421 19:51:09.619608   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.619616   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:09.619621   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:09.619665   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:09.659900   58211 cri.go:89] found id: ""
	I0421 19:51:09.659931   58211 logs.go:276] 0 containers: []
	W0421 19:51:09.659942   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:09.659953   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:09.659968   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:09.721641   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:09.721675   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:09.737594   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:09.737619   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:09.822856   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:09.822881   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:09.822894   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:09.910627   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:09.910665   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:12.461898   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:12.478188   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:12.478251   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:12.524404   58211 cri.go:89] found id: ""
	I0421 19:51:12.524427   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.524435   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:12.524440   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:12.524512   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:12.565920   58211 cri.go:89] found id: ""
	I0421 19:51:12.565949   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.565961   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:12.565968   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:12.566083   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:12.607430   58211 cri.go:89] found id: ""
	I0421 19:51:12.607456   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.607466   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:12.607473   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:12.607533   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:12.655526   58211 cri.go:89] found id: ""
	I0421 19:51:12.655559   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.655571   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:12.655578   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:12.655632   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:12.697311   58211 cri.go:89] found id: ""
	I0421 19:51:12.697344   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.697355   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:12.697362   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:12.697419   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:12.738567   58211 cri.go:89] found id: ""
	I0421 19:51:12.738598   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.738610   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:12.738618   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:12.738692   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:12.780335   58211 cri.go:89] found id: ""
	I0421 19:51:12.780366   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.780377   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:12.780385   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:12.780453   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:12.849567   58211 cri.go:89] found id: ""
	I0421 19:51:12.849596   58211 logs.go:276] 0 containers: []
	W0421 19:51:12.849607   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:12.849617   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:12.849633   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:12.935844   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:12.935874   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:12.988795   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:12.988831   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:13.046133   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:13.046181   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:13.062354   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:13.062389   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:13.154560   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:15.655366   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:15.675588   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:15.675663   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:15.724257   58211 cri.go:89] found id: ""
	I0421 19:51:15.724289   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.724301   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:15.724309   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:15.724370   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:15.769947   58211 cri.go:89] found id: ""
	I0421 19:51:15.769979   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.769991   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:15.769999   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:15.770076   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:15.821528   58211 cri.go:89] found id: ""
	I0421 19:51:15.821558   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.821570   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:15.821578   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:15.821635   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:15.864280   58211 cri.go:89] found id: ""
	I0421 19:51:15.864310   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.864321   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:15.864328   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:15.864392   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:15.904245   58211 cri.go:89] found id: ""
	I0421 19:51:15.904278   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.904289   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:15.904297   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:15.904345   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:15.944572   58211 cri.go:89] found id: ""
	I0421 19:51:15.944607   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.944617   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:15.944630   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:15.944697   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:15.989936   58211 cri.go:89] found id: ""
	I0421 19:51:15.990040   58211 logs.go:276] 0 containers: []
	W0421 19:51:15.990067   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:15.990079   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:15.990146   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:16.029958   58211 cri.go:89] found id: ""
	I0421 19:51:16.029991   58211 logs.go:276] 0 containers: []
	W0421 19:51:16.030003   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:16.030018   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:16.030034   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:16.128514   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:16.128554   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:16.175919   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:16.175955   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:16.232579   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:16.232618   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:16.248110   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:16.248146   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:16.346803   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:18.848013   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:18.866002   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:18.866086   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:18.920741   58211 cri.go:89] found id: ""
	I0421 19:51:18.920775   58211 logs.go:276] 0 containers: []
	W0421 19:51:18.920787   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:18.920796   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:18.920877   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:18.962243   58211 cri.go:89] found id: ""
	I0421 19:51:18.962273   58211 logs.go:276] 0 containers: []
	W0421 19:51:18.962289   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:18.962298   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:18.962360   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:19.010297   58211 cri.go:89] found id: ""
	I0421 19:51:19.010328   58211 logs.go:276] 0 containers: []
	W0421 19:51:19.010337   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:19.010344   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:19.010411   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:19.054354   58211 cri.go:89] found id: ""
	I0421 19:51:19.054381   58211 logs.go:276] 0 containers: []
	W0421 19:51:19.054392   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:19.054399   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:19.054459   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:19.093032   58211 cri.go:89] found id: ""
	I0421 19:51:19.093062   58211 logs.go:276] 0 containers: []
	W0421 19:51:19.093074   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:19.093081   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:19.093148   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:19.136650   58211 cri.go:89] found id: ""
	I0421 19:51:19.136679   58211 logs.go:276] 0 containers: []
	W0421 19:51:19.136687   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:19.136693   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:19.136741   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:19.188739   58211 cri.go:89] found id: ""
	I0421 19:51:19.188770   58211 logs.go:276] 0 containers: []
	W0421 19:51:19.188780   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:19.188788   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:19.188852   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:19.237185   58211 cri.go:89] found id: ""
	I0421 19:51:19.237221   58211 logs.go:276] 0 containers: []
	W0421 19:51:19.237232   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:19.237245   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:19.237263   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:19.339818   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:19.339849   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:19.339866   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:19.438936   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:19.438986   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:19.514863   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:19.514903   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:19.590006   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:19.590040   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:22.109943   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:22.130905   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:22.130989   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:22.184892   58211 cri.go:89] found id: ""
	I0421 19:51:22.184924   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.184937   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:22.184945   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:22.185009   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:22.233928   58211 cri.go:89] found id: ""
	I0421 19:51:22.233962   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.233974   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:22.233982   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:22.234040   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:22.274371   58211 cri.go:89] found id: ""
	I0421 19:51:22.274403   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.274410   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:22.274416   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:22.274464   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:22.312287   58211 cri.go:89] found id: ""
	I0421 19:51:22.312316   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.312326   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:22.312343   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:22.312394   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:22.352137   58211 cri.go:89] found id: ""
	I0421 19:51:22.352163   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.352170   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:22.352176   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:22.352226   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:22.389903   58211 cri.go:89] found id: ""
	I0421 19:51:22.389929   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.389938   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:22.389945   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:22.390006   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:22.428793   58211 cri.go:89] found id: ""
	I0421 19:51:22.428814   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.428822   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:22.428828   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:22.428875   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:22.467968   58211 cri.go:89] found id: ""
	I0421 19:51:22.467997   58211 logs.go:276] 0 containers: []
	W0421 19:51:22.468005   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:22.468012   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:22.468024   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:22.483492   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:22.483525   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:22.571122   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:22.571146   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:22.571161   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:22.654545   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:22.654584   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:22.699970   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:22.700003   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:25.257367   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:25.275373   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:25.275444   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:25.315455   58211 cri.go:89] found id: ""
	I0421 19:51:25.315482   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.315491   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:25.315497   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:25.315564   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:25.370341   58211 cri.go:89] found id: ""
	I0421 19:51:25.370366   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.370378   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:25.370385   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:25.370445   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:25.420787   58211 cri.go:89] found id: ""
	I0421 19:51:25.420827   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.420838   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:25.420844   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:25.420913   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:25.460154   58211 cri.go:89] found id: ""
	I0421 19:51:25.460189   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.460203   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:25.460215   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:25.460287   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:25.498984   58211 cri.go:89] found id: ""
	I0421 19:51:25.499013   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.499023   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:25.499031   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:25.499095   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:25.537368   58211 cri.go:89] found id: ""
	I0421 19:51:25.537400   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.537410   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:25.537418   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:25.537480   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:25.575147   58211 cri.go:89] found id: ""
	I0421 19:51:25.575178   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.575190   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:25.575202   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:25.575252   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:25.625500   58211 cri.go:89] found id: ""
	I0421 19:51:25.625528   58211 logs.go:276] 0 containers: []
	W0421 19:51:25.625537   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:25.625548   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:25.625560   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:25.682854   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:25.682885   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:25.699095   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:25.699129   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:25.782553   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:25.782576   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:25.782588   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:25.863212   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:25.863302   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:28.407131   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:28.422181   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:28.422253   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:28.464685   58211 cri.go:89] found id: ""
	I0421 19:51:28.464734   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.464743   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:28.464749   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:28.464806   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:28.506094   58211 cri.go:89] found id: ""
	I0421 19:51:28.506124   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.506135   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:28.506143   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:28.506205   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:28.547155   58211 cri.go:89] found id: ""
	I0421 19:51:28.547185   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.547196   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:28.547202   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:28.547263   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:28.587962   58211 cri.go:89] found id: ""
	I0421 19:51:28.587993   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.588004   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:28.588012   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:28.588072   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:28.629355   58211 cri.go:89] found id: ""
	I0421 19:51:28.629387   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.629398   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:28.629406   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:28.629458   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:28.669302   58211 cri.go:89] found id: ""
	I0421 19:51:28.669332   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.669342   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:28.669350   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:28.669413   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:28.707720   58211 cri.go:89] found id: ""
	I0421 19:51:28.707742   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.707750   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:28.707755   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:28.707808   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:28.747947   58211 cri.go:89] found id: ""
	I0421 19:51:28.747977   58211 logs.go:276] 0 containers: []
	W0421 19:51:28.747986   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:28.747994   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:28.748007   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:28.835512   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:28.835543   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:28.835559   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:28.909435   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:28.909468   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:28.960652   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:28.960678   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:29.013318   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:29.013352   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:31.530335   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:31.545824   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:31.545890   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:31.598399   58211 cri.go:89] found id: ""
	I0421 19:51:31.598429   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.598440   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:31.598447   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:31.598506   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:31.638707   58211 cri.go:89] found id: ""
	I0421 19:51:31.638738   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.638748   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:31.638755   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:31.638820   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:31.680809   58211 cri.go:89] found id: ""
	I0421 19:51:31.680842   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.680854   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:31.680862   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:31.680923   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:31.723289   58211 cri.go:89] found id: ""
	I0421 19:51:31.723327   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.723338   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:31.723346   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:31.723398   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:31.763905   58211 cri.go:89] found id: ""
	I0421 19:51:31.763940   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.763951   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:31.763959   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:31.764028   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:31.802491   58211 cri.go:89] found id: ""
	I0421 19:51:31.802530   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.802543   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:31.802551   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:31.802610   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:31.841675   58211 cri.go:89] found id: ""
	I0421 19:51:31.841706   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.841716   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:31.841730   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:31.841785   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:31.884452   58211 cri.go:89] found id: ""
	I0421 19:51:31.884488   58211 logs.go:276] 0 containers: []
	W0421 19:51:31.884500   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:31.884513   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:31.884529   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:31.969060   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:31.969096   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:32.010521   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:32.010557   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:32.064154   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:32.064185   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:32.081734   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:32.081765   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:32.175029   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:34.676014   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:34.692352   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:34.692413   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:34.741080   58211 cri.go:89] found id: ""
	I0421 19:51:34.741105   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.741116   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:34.741123   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:34.741179   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:34.782570   58211 cri.go:89] found id: ""
	I0421 19:51:34.782593   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.782603   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:34.782611   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:34.782665   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:34.826089   58211 cri.go:89] found id: ""
	I0421 19:51:34.826118   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.826128   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:34.826135   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:34.826195   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:34.868659   58211 cri.go:89] found id: ""
	I0421 19:51:34.868688   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.868700   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:34.868708   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:34.868766   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:34.905520   58211 cri.go:89] found id: ""
	I0421 19:51:34.905557   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.905566   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:34.905571   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:34.905627   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:34.951015   58211 cri.go:89] found id: ""
	I0421 19:51:34.951046   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.951056   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:34.951064   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:34.951130   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:34.991922   58211 cri.go:89] found id: ""
	I0421 19:51:34.991952   58211 logs.go:276] 0 containers: []
	W0421 19:51:34.991963   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:34.991970   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:34.992032   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:35.037964   58211 cri.go:89] found id: ""
	I0421 19:51:35.037992   58211 logs.go:276] 0 containers: []
	W0421 19:51:35.038002   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:35.038012   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:35.038027   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:35.095322   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:35.095359   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:35.113895   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:35.113934   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:35.200237   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:35.200261   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:35.200275   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:35.308583   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:35.308616   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:37.867355   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:37.883058   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:37.883126   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:37.921881   58211 cri.go:89] found id: ""
	I0421 19:51:37.921917   58211 logs.go:276] 0 containers: []
	W0421 19:51:37.921926   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:37.921932   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:37.921987   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:37.961052   58211 cri.go:89] found id: ""
	I0421 19:51:37.961082   58211 logs.go:276] 0 containers: []
	W0421 19:51:37.961091   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:37.961096   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:37.961157   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:38.004178   58211 cri.go:89] found id: ""
	I0421 19:51:38.004207   58211 logs.go:276] 0 containers: []
	W0421 19:51:38.004223   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:38.004230   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:38.004299   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:38.055982   58211 cri.go:89] found id: ""
	I0421 19:51:38.056018   58211 logs.go:276] 0 containers: []
	W0421 19:51:38.056032   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:38.056046   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:38.056113   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:38.106300   58211 cri.go:89] found id: ""
	I0421 19:51:38.106331   58211 logs.go:276] 0 containers: []
	W0421 19:51:38.106341   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:38.106348   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:38.106411   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:38.148438   58211 cri.go:89] found id: ""
	I0421 19:51:38.148463   58211 logs.go:276] 0 containers: []
	W0421 19:51:38.148471   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:38.148477   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:38.148532   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:38.195849   58211 cri.go:89] found id: ""
	I0421 19:51:38.195877   58211 logs.go:276] 0 containers: []
	W0421 19:51:38.195887   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:38.195894   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:38.195956   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:38.243079   58211 cri.go:89] found id: ""
	I0421 19:51:38.243111   58211 logs.go:276] 0 containers: []
	W0421 19:51:38.243125   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:38.243135   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:38.243150   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:38.260746   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:38.260782   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:38.343439   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:38.343469   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:38.343485   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:38.442234   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:38.442277   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:38.492578   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:38.492622   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:41.067298   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:41.085341   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:41.085419   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:41.138273   58211 cri.go:89] found id: ""
	I0421 19:51:41.138304   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.138314   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:41.138321   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:41.138404   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:41.183991   58211 cri.go:89] found id: ""
	I0421 19:51:41.184022   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.184032   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:41.184038   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:41.184097   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:41.236868   58211 cri.go:89] found id: ""
	I0421 19:51:41.236900   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.236908   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:41.236913   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:41.236969   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:41.287082   58211 cri.go:89] found id: ""
	I0421 19:51:41.287114   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.287123   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:41.287131   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:41.287196   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:41.330357   58211 cri.go:89] found id: ""
	I0421 19:51:41.330393   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.330404   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:41.330411   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:41.330469   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:41.372255   58211 cri.go:89] found id: ""
	I0421 19:51:41.372284   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.372295   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:41.372303   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:41.372388   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:41.417164   58211 cri.go:89] found id: ""
	I0421 19:51:41.417197   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.417208   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:41.417215   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:41.417292   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:41.466457   58211 cri.go:89] found id: ""
	I0421 19:51:41.466488   58211 logs.go:276] 0 containers: []
	W0421 19:51:41.466499   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:41.466513   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:41.466530   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:41.554114   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:41.554140   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:41.554160   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:41.668746   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:41.668788   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:41.722291   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:41.722323   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:41.789447   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:41.789482   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:44.308449   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:44.322354   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:44.322426   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:44.362379   58211 cri.go:89] found id: ""
	I0421 19:51:44.362416   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.362428   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:44.362435   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:44.362488   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:44.409086   58211 cri.go:89] found id: ""
	I0421 19:51:44.409110   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.409121   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:44.409129   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:44.409185   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:44.455449   58211 cri.go:89] found id: ""
	I0421 19:51:44.455477   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.455485   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:44.455496   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:44.455550   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:44.497500   58211 cri.go:89] found id: ""
	I0421 19:51:44.497533   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.497550   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:44.497558   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:44.497611   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:44.542478   58211 cri.go:89] found id: ""
	I0421 19:51:44.542506   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.542517   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:44.542525   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:44.542593   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:44.587201   58211 cri.go:89] found id: ""
	I0421 19:51:44.587233   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.587244   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:44.587251   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:44.587417   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:44.635440   58211 cri.go:89] found id: ""
	I0421 19:51:44.635471   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.635483   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:44.635491   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:44.635551   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:44.678860   58211 cri.go:89] found id: ""
	I0421 19:51:44.678894   58211 logs.go:276] 0 containers: []
	W0421 19:51:44.678904   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:44.678922   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:44.678940   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:44.733259   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:44.733298   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:44.751361   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:44.751392   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:44.837517   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:44.837553   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:44.837568   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:44.928210   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:44.928261   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:47.476697   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:47.491049   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:47.491126   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:47.533972   58211 cri.go:89] found id: ""
	I0421 19:51:47.534001   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.534011   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:47.534020   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:47.534087   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:47.583274   58211 cri.go:89] found id: ""
	I0421 19:51:47.583307   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.583319   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:47.583330   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:47.583398   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:47.625458   58211 cri.go:89] found id: ""
	I0421 19:51:47.625486   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.625497   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:47.625504   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:47.625563   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:47.666963   58211 cri.go:89] found id: ""
	I0421 19:51:47.666995   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.667006   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:47.667013   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:47.667074   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:47.708045   58211 cri.go:89] found id: ""
	I0421 19:51:47.708075   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.708085   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:47.708092   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:47.708155   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:47.754816   58211 cri.go:89] found id: ""
	I0421 19:51:47.754847   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.754856   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:47.754862   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:47.754916   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:47.804889   58211 cri.go:89] found id: ""
	I0421 19:51:47.804919   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.804930   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:47.804938   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:47.804992   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:47.848998   58211 cri.go:89] found id: ""
	I0421 19:51:47.849024   58211 logs.go:276] 0 containers: []
	W0421 19:51:47.849037   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:47.849047   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:47.849061   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:47.906106   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:47.906143   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:47.925659   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:47.925696   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:48.013445   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:48.013470   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:48.013481   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:48.106960   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:48.106993   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:50.654995   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:50.670022   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:50.670168   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:50.720926   58211 cri.go:89] found id: ""
	I0421 19:51:50.720954   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.720965   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:50.720972   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:50.721052   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:50.770509   58211 cri.go:89] found id: ""
	I0421 19:51:50.770539   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.770550   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:50.770557   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:50.770626   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:50.817422   58211 cri.go:89] found id: ""
	I0421 19:51:50.817451   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.817463   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:50.817471   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:50.817537   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:50.863741   58211 cri.go:89] found id: ""
	I0421 19:51:50.863774   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.863787   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:50.863796   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:50.863853   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:50.908127   58211 cri.go:89] found id: ""
	I0421 19:51:50.908152   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.908166   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:50.908174   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:50.908257   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:50.952619   58211 cri.go:89] found id: ""
	I0421 19:51:50.952650   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.952661   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:50.952669   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:50.952734   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:50.991717   58211 cri.go:89] found id: ""
	I0421 19:51:50.991749   58211 logs.go:276] 0 containers: []
	W0421 19:51:50.991760   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:50.991769   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:50.991830   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:51.032583   58211 cri.go:89] found id: ""
	I0421 19:51:51.032624   58211 logs.go:276] 0 containers: []
	W0421 19:51:51.032635   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:51.032646   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:51.032660   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:51.087341   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:51.087373   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:51.103835   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:51.103877   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:51.186651   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:51.186677   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:51.186691   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:51.271128   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:51.271164   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:53.820596   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:53.834017   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:53.834090   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:53.876079   58211 cri.go:89] found id: ""
	I0421 19:51:53.876115   58211 logs.go:276] 0 containers: []
	W0421 19:51:53.876126   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:53.876133   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:53.876194   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:53.916911   58211 cri.go:89] found id: ""
	I0421 19:51:53.916937   58211 logs.go:276] 0 containers: []
	W0421 19:51:53.916946   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:53.916952   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:53.916997   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:53.959084   58211 cri.go:89] found id: ""
	I0421 19:51:53.959114   58211 logs.go:276] 0 containers: []
	W0421 19:51:53.959124   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:53.959130   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:53.959186   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:53.997674   58211 cri.go:89] found id: ""
	I0421 19:51:53.997703   58211 logs.go:276] 0 containers: []
	W0421 19:51:53.997719   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:53.997727   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:53.997793   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:54.065115   58211 cri.go:89] found id: ""
	I0421 19:51:54.065141   58211 logs.go:276] 0 containers: []
	W0421 19:51:54.065149   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:54.065155   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:54.065207   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:54.105659   58211 cri.go:89] found id: ""
	I0421 19:51:54.105733   58211 logs.go:276] 0 containers: []
	W0421 19:51:54.105757   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:54.105766   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:54.105841   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:54.149462   58211 cri.go:89] found id: ""
	I0421 19:51:54.149494   58211 logs.go:276] 0 containers: []
	W0421 19:51:54.149506   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:54.149514   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:54.149576   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:54.197478   58211 cri.go:89] found id: ""
	I0421 19:51:54.197514   58211 logs.go:276] 0 containers: []
	W0421 19:51:54.197525   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:54.197536   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:54.197551   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:51:54.252697   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:54.252737   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:54.270227   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:54.270261   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:54.351457   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:54.351477   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:54.351491   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:54.434794   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:54.434827   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:56.986174   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:51:57.002378   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:51:57.002434   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:51:57.042589   58211 cri.go:89] found id: ""
	I0421 19:51:57.042624   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.042636   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:51:57.042644   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:51:57.042706   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:51:57.081788   58211 cri.go:89] found id: ""
	I0421 19:51:57.081819   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.081830   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:51:57.081836   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:51:57.081888   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:51:57.123405   58211 cri.go:89] found id: ""
	I0421 19:51:57.123431   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.123439   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:51:57.123444   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:51:57.123496   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:51:57.164492   58211 cri.go:89] found id: ""
	I0421 19:51:57.164522   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.164532   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:51:57.164537   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:51:57.164603   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:51:57.207495   58211 cri.go:89] found id: ""
	I0421 19:51:57.207528   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.207540   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:51:57.207549   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:51:57.207607   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:51:57.250339   58211 cri.go:89] found id: ""
	I0421 19:51:57.250366   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.250373   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:51:57.250379   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:51:57.250424   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:51:57.291088   58211 cri.go:89] found id: ""
	I0421 19:51:57.291118   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.291127   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:51:57.291134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:51:57.291190   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:51:57.329346   58211 cri.go:89] found id: ""
	I0421 19:51:57.329373   58211 logs.go:276] 0 containers: []
	W0421 19:51:57.329385   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:51:57.329396   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:51:57.329412   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:51:57.344757   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:51:57.344783   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:51:57.421642   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:51:57.421662   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:51:57.421674   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:51:57.510660   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:51:57.510696   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:51:57.559100   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:51:57.559131   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:00.116958   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:00.130712   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:00.130773   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:00.169616   58211 cri.go:89] found id: ""
	I0421 19:52:00.169640   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.169662   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:00.169671   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:00.169741   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:00.215267   58211 cri.go:89] found id: ""
	I0421 19:52:00.215291   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.215299   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:00.215307   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:00.215369   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:00.257591   58211 cri.go:89] found id: ""
	I0421 19:52:00.257619   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.257631   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:00.257639   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:00.257699   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:00.294791   58211 cri.go:89] found id: ""
	I0421 19:52:00.294886   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.294904   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:00.294916   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:00.295000   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:00.334964   58211 cri.go:89] found id: ""
	I0421 19:52:00.334993   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.335003   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:00.335010   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:00.335075   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:00.381063   58211 cri.go:89] found id: ""
	I0421 19:52:00.381091   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.381102   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:00.381116   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:00.381175   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:00.434519   58211 cri.go:89] found id: ""
	I0421 19:52:00.434551   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.434561   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:00.434567   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:00.434618   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:00.489459   58211 cri.go:89] found id: ""
	I0421 19:52:00.489492   58211 logs.go:276] 0 containers: []
	W0421 19:52:00.489503   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:00.489511   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:00.489526   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:00.585037   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:00.585097   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:00.635561   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:00.635592   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:00.692280   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:00.692320   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:00.707808   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:00.707834   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:00.791730   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:03.292512   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:03.306904   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:03.306988   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:03.348057   58211 cri.go:89] found id: ""
	I0421 19:52:03.348087   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.348098   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:03.348104   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:03.348156   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:03.387629   58211 cri.go:89] found id: ""
	I0421 19:52:03.387656   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.387666   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:03.387673   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:03.387736   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:03.432517   58211 cri.go:89] found id: ""
	I0421 19:52:03.432548   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.432560   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:03.432566   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:03.432702   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:03.474402   58211 cri.go:89] found id: ""
	I0421 19:52:03.474427   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.474434   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:03.474439   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:03.474488   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:03.518030   58211 cri.go:89] found id: ""
	I0421 19:52:03.518129   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.518153   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:03.518171   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:03.518245   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:03.563594   58211 cri.go:89] found id: ""
	I0421 19:52:03.563619   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.563627   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:03.563633   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:03.563682   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:03.610403   58211 cri.go:89] found id: ""
	I0421 19:52:03.610428   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.610440   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:03.610445   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:03.610505   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:03.651656   58211 cri.go:89] found id: ""
	I0421 19:52:03.651691   58211 logs.go:276] 0 containers: []
	W0421 19:52:03.651706   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:03.651716   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:03.651737   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:03.704283   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:03.704315   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:03.719554   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:03.719587   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:03.802072   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:03.802104   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:03.802125   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:03.891549   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:03.891583   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:06.445772   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:06.466743   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:06.466820   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:06.532814   58211 cri.go:89] found id: ""
	I0421 19:52:06.532843   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.532854   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:06.532860   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:06.532930   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:06.581551   58211 cri.go:89] found id: ""
	I0421 19:52:06.581579   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.581592   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:06.581601   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:06.581659   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:06.625225   58211 cri.go:89] found id: ""
	I0421 19:52:06.625254   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.625271   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:06.625278   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:06.625337   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:06.666933   58211 cri.go:89] found id: ""
	I0421 19:52:06.666961   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.666971   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:06.666978   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:06.667043   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:06.720062   58211 cri.go:89] found id: ""
	I0421 19:52:06.720099   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.720111   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:06.720119   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:06.720183   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:06.766688   58211 cri.go:89] found id: ""
	I0421 19:52:06.766728   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.766739   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:06.766747   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:06.766812   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:06.811220   58211 cri.go:89] found id: ""
	I0421 19:52:06.811258   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.811272   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:06.811282   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:06.811347   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:06.851423   58211 cri.go:89] found id: ""
	I0421 19:52:06.851451   58211 logs.go:276] 0 containers: []
	W0421 19:52:06.851459   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:06.851468   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:06.851481   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:06.866533   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:06.866569   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:06.949051   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:06.949080   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:06.949094   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:07.033360   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:07.033392   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:07.090974   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:07.091008   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:09.648875   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:09.664461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:09.664540   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:09.705403   58211 cri.go:89] found id: ""
	I0421 19:52:09.705424   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.705432   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:09.705437   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:09.705492   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:09.745309   58211 cri.go:89] found id: ""
	I0421 19:52:09.745337   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.745346   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:09.745352   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:09.745401   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:09.786622   58211 cri.go:89] found id: ""
	I0421 19:52:09.786648   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.786659   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:09.786666   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:09.786719   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:09.823927   58211 cri.go:89] found id: ""
	I0421 19:52:09.823957   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.823967   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:09.823974   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:09.824029   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:09.869429   58211 cri.go:89] found id: ""
	I0421 19:52:09.869457   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.869466   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:09.869471   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:09.869526   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:09.911502   58211 cri.go:89] found id: ""
	I0421 19:52:09.911527   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.911535   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:09.911541   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:09.911597   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:09.953769   58211 cri.go:89] found id: ""
	I0421 19:52:09.953798   58211 logs.go:276] 0 containers: []
	W0421 19:52:09.953807   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:09.953814   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:09.953879   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:10.003855   58211 cri.go:89] found id: ""
	I0421 19:52:10.003886   58211 logs.go:276] 0 containers: []
	W0421 19:52:10.003897   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:10.003909   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:10.003929   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:10.096588   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:10.096627   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:10.148456   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:10.148489   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:10.211594   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:10.211626   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:10.227603   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:10.227633   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:10.299592   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:12.800051   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:12.816453   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:12.816529   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:12.858696   58211 cri.go:89] found id: ""
	I0421 19:52:12.858725   58211 logs.go:276] 0 containers: []
	W0421 19:52:12.858736   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:12.858744   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:12.858807   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:12.901240   58211 cri.go:89] found id: ""
	I0421 19:52:12.901273   58211 logs.go:276] 0 containers: []
	W0421 19:52:12.901285   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:12.901293   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:12.901353   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:12.950623   58211 cri.go:89] found id: ""
	I0421 19:52:12.950680   58211 logs.go:276] 0 containers: []
	W0421 19:52:12.950692   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:12.950700   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:12.950762   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:12.993321   58211 cri.go:89] found id: ""
	I0421 19:52:12.993353   58211 logs.go:276] 0 containers: []
	W0421 19:52:12.993363   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:12.993374   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:12.993433   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:13.041918   58211 cri.go:89] found id: ""
	I0421 19:52:13.041946   58211 logs.go:276] 0 containers: []
	W0421 19:52:13.041956   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:13.041964   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:13.042046   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:13.081527   58211 cri.go:89] found id: ""
	I0421 19:52:13.081556   58211 logs.go:276] 0 containers: []
	W0421 19:52:13.081566   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:13.081574   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:13.081635   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:13.125583   58211 cri.go:89] found id: ""
	I0421 19:52:13.125613   58211 logs.go:276] 0 containers: []
	W0421 19:52:13.125622   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:13.125628   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:13.125695   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:13.168353   58211 cri.go:89] found id: ""
	I0421 19:52:13.168380   58211 logs.go:276] 0 containers: []
	W0421 19:52:13.168392   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:13.168402   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:13.168418   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:13.224793   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:13.224829   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:13.241963   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:13.241995   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:13.329883   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:13.329909   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:13.329928   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:13.415052   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:13.415094   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:15.963896   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:15.980511   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:15.980598   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:16.027309   58211 cri.go:89] found id: ""
	I0421 19:52:16.027338   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.027348   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:16.027355   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:16.027422   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:16.070890   58211 cri.go:89] found id: ""
	I0421 19:52:16.070919   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.070929   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:16.070935   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:16.070986   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:16.112898   58211 cri.go:89] found id: ""
	I0421 19:52:16.112918   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.112926   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:16.112931   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:16.112993   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:16.154716   58211 cri.go:89] found id: ""
	I0421 19:52:16.154743   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.154751   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:16.154756   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:16.154820   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:16.198113   58211 cri.go:89] found id: ""
	I0421 19:52:16.198141   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.198152   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:16.198160   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:16.198224   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:16.237191   58211 cri.go:89] found id: ""
	I0421 19:52:16.237221   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.237231   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:16.237239   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:16.237290   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:16.279597   58211 cri.go:89] found id: ""
	I0421 19:52:16.279629   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.279640   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:16.279648   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:16.279713   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:16.318211   58211 cri.go:89] found id: ""
	I0421 19:52:16.318240   58211 logs.go:276] 0 containers: []
	W0421 19:52:16.318249   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:16.318257   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:16.318277   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:16.373709   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:16.373742   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:16.389398   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:16.389429   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:16.463563   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:16.463585   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:16.463599   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:16.545972   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:16.546011   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:19.091756   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:19.108145   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:19.108217   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:19.148741   58211 cri.go:89] found id: ""
	I0421 19:52:19.148775   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.148786   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:19.148795   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:19.148860   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:19.188531   58211 cri.go:89] found id: ""
	I0421 19:52:19.188559   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.188571   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:19.188593   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:19.188670   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:19.230819   58211 cri.go:89] found id: ""
	I0421 19:52:19.230857   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.230868   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:19.230876   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:19.230933   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:19.277149   58211 cri.go:89] found id: ""
	I0421 19:52:19.277180   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.277188   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:19.277198   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:19.277251   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:19.324879   58211 cri.go:89] found id: ""
	I0421 19:52:19.324904   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.324914   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:19.324920   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:19.324982   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:19.369071   58211 cri.go:89] found id: ""
	I0421 19:52:19.369103   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.369114   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:19.369121   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:19.369181   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:19.412476   58211 cri.go:89] found id: ""
	I0421 19:52:19.412499   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.412507   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:19.412512   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:19.412557   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:19.460314   58211 cri.go:89] found id: ""
	I0421 19:52:19.460341   58211 logs.go:276] 0 containers: []
	W0421 19:52:19.460350   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:19.460361   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:19.460377   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:19.514749   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:19.514786   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:19.530565   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:19.530596   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:19.609588   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:19.609611   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:19.609624   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:19.693943   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:19.693972   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:22.241431   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:22.256778   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:22.256841   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:22.297300   58211 cri.go:89] found id: ""
	I0421 19:52:22.297329   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.297346   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:22.297352   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:22.297400   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:22.340351   58211 cri.go:89] found id: ""
	I0421 19:52:22.340378   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.340386   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:22.340391   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:22.340438   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:22.380041   58211 cri.go:89] found id: ""
	I0421 19:52:22.380070   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.380079   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:22.380094   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:22.380145   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:22.422098   58211 cri.go:89] found id: ""
	I0421 19:52:22.422125   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.422132   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:22.422137   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:22.422195   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:22.464564   58211 cri.go:89] found id: ""
	I0421 19:52:22.464593   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.464601   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:22.464607   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:22.464664   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:22.505194   58211 cri.go:89] found id: ""
	I0421 19:52:22.505222   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.505233   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:22.505240   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:22.505299   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:22.546749   58211 cri.go:89] found id: ""
	I0421 19:52:22.546774   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.546785   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:22.546793   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:22.546854   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:22.589171   58211 cri.go:89] found id: ""
	I0421 19:52:22.589197   58211 logs.go:276] 0 containers: []
	W0421 19:52:22.589206   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:22.589215   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:22.589227   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:22.629525   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:22.629551   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:22.681991   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:22.682021   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:22.697526   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:22.697551   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:22.777461   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:22.777480   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:22.777494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:25.356997   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:25.372594   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:25.372668   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:25.415934   58211 cri.go:89] found id: ""
	I0421 19:52:25.415961   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.415969   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:25.415975   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:25.416035   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:25.459787   58211 cri.go:89] found id: ""
	I0421 19:52:25.459820   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.459833   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:25.459840   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:25.459904   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:25.505384   58211 cri.go:89] found id: ""
	I0421 19:52:25.505487   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.505502   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:25.505511   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:25.505599   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:25.544909   58211 cri.go:89] found id: ""
	I0421 19:52:25.544933   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.544940   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:25.544946   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:25.544996   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:25.584880   58211 cri.go:89] found id: ""
	I0421 19:52:25.584907   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.584917   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:25.584925   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:25.584977   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:25.625683   58211 cri.go:89] found id: ""
	I0421 19:52:25.625710   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.625719   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:25.625726   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:25.625772   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:25.668341   58211 cri.go:89] found id: ""
	I0421 19:52:25.668364   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.668374   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:25.668382   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:25.668432   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:25.705859   58211 cri.go:89] found id: ""
	I0421 19:52:25.705889   58211 logs.go:276] 0 containers: []
	W0421 19:52:25.705900   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:25.705910   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:25.705924   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:25.759626   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:25.759658   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:25.774630   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:25.774674   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:25.854495   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:25.854521   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:25.854537   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:25.935490   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:25.935525   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:28.476404   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:28.493654   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:28.493721   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:28.534945   58211 cri.go:89] found id: ""
	I0421 19:52:28.534970   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.534979   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:28.534987   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:28.535049   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:28.576715   58211 cri.go:89] found id: ""
	I0421 19:52:28.576746   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.576759   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:28.576766   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:28.576830   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:28.614251   58211 cri.go:89] found id: ""
	I0421 19:52:28.614274   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.614282   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:28.614297   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:28.614392   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:28.655467   58211 cri.go:89] found id: ""
	I0421 19:52:28.655492   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.655502   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:28.655510   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:28.655572   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:28.699141   58211 cri.go:89] found id: ""
	I0421 19:52:28.699166   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.699173   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:28.699179   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:28.699226   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:28.743314   58211 cri.go:89] found id: ""
	I0421 19:52:28.743340   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.743348   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:28.743353   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:28.743410   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:28.790470   58211 cri.go:89] found id: ""
	I0421 19:52:28.790514   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.790525   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:28.790533   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:28.790609   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:28.829297   58211 cri.go:89] found id: ""
	I0421 19:52:28.829323   58211 logs.go:276] 0 containers: []
	W0421 19:52:28.829334   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:28.829344   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:28.829360   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:28.844109   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:28.844136   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:28.918880   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:28.918909   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:28.918926   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:28.995716   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:28.995750   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:29.051451   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:29.051481   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:31.615395   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:31.630906   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:31.630976   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:31.675630   58211 cri.go:89] found id: ""
	I0421 19:52:31.675658   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.675669   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:31.675676   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:31.675741   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:31.718916   58211 cri.go:89] found id: ""
	I0421 19:52:31.718943   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.718953   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:31.718960   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:31.719027   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:31.760412   58211 cri.go:89] found id: ""
	I0421 19:52:31.760437   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.760448   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:31.760454   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:31.760504   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:31.801093   58211 cri.go:89] found id: ""
	I0421 19:52:31.801122   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.801134   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:31.801149   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:31.801209   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:31.842999   58211 cri.go:89] found id: ""
	I0421 19:52:31.843028   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.843039   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:31.843047   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:31.843112   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:31.882032   58211 cri.go:89] found id: ""
	I0421 19:52:31.882074   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.882105   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:31.882115   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:31.882171   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:31.922418   58211 cri.go:89] found id: ""
	I0421 19:52:31.922447   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.922458   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:31.922465   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:31.922530   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:31.961188   58211 cri.go:89] found id: ""
	I0421 19:52:31.961219   58211 logs.go:276] 0 containers: []
	W0421 19:52:31.961227   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:31.961237   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:31.961252   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:32.035882   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:32.035902   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:32.035915   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:32.115513   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:32.115555   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:32.158467   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:32.158499   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:32.215796   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:32.215826   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:34.730748   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:34.746216   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:34.746284   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:34.797392   58211 cri.go:89] found id: ""
	I0421 19:52:34.797419   58211 logs.go:276] 0 containers: []
	W0421 19:52:34.797429   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:34.797435   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:34.797498   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:34.841997   58211 cri.go:89] found id: ""
	I0421 19:52:34.842023   58211 logs.go:276] 0 containers: []
	W0421 19:52:34.842033   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:34.842040   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:34.842118   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:34.880554   58211 cri.go:89] found id: ""
	I0421 19:52:34.880584   58211 logs.go:276] 0 containers: []
	W0421 19:52:34.880596   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:34.880603   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:34.880664   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:34.919918   58211 cri.go:89] found id: ""
	I0421 19:52:34.919946   58211 logs.go:276] 0 containers: []
	W0421 19:52:34.919956   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:34.919963   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:34.920023   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:34.959110   58211 cri.go:89] found id: ""
	I0421 19:52:34.959145   58211 logs.go:276] 0 containers: []
	W0421 19:52:34.959156   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:34.959165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:34.959231   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:35.000216   58211 cri.go:89] found id: ""
	I0421 19:52:35.000246   58211 logs.go:276] 0 containers: []
	W0421 19:52:35.000258   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:35.000267   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:35.000336   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:35.040890   58211 cri.go:89] found id: ""
	I0421 19:52:35.040920   58211 logs.go:276] 0 containers: []
	W0421 19:52:35.040931   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:35.040939   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:35.041011   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:35.081377   58211 cri.go:89] found id: ""
	I0421 19:52:35.081416   58211 logs.go:276] 0 containers: []
	W0421 19:52:35.081427   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:35.081440   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:35.081457   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:35.163900   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:35.163925   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:35.163939   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:35.248377   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:35.248413   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:35.296607   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:35.296641   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:35.351865   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:35.351901   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:37.867722   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:37.883038   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:37.883107   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:37.928360   58211 cri.go:89] found id: ""
	I0421 19:52:37.928387   58211 logs.go:276] 0 containers: []
	W0421 19:52:37.928396   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:37.928404   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:37.928468   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:37.966849   58211 cri.go:89] found id: ""
	I0421 19:52:37.966880   58211 logs.go:276] 0 containers: []
	W0421 19:52:37.966891   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:37.966898   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:37.966964   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:38.010244   58211 cri.go:89] found id: ""
	I0421 19:52:38.010269   58211 logs.go:276] 0 containers: []
	W0421 19:52:38.010277   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:38.010288   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:38.010337   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:38.055601   58211 cri.go:89] found id: ""
	I0421 19:52:38.055629   58211 logs.go:276] 0 containers: []
	W0421 19:52:38.055641   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:38.055646   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:38.055697   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:38.097249   58211 cri.go:89] found id: ""
	I0421 19:52:38.097278   58211 logs.go:276] 0 containers: []
	W0421 19:52:38.097286   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:38.097292   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:38.097342   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:38.140948   58211 cri.go:89] found id: ""
	I0421 19:52:38.140977   58211 logs.go:276] 0 containers: []
	W0421 19:52:38.140986   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:38.140992   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:38.141045   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:38.180442   58211 cri.go:89] found id: ""
	I0421 19:52:38.180468   58211 logs.go:276] 0 containers: []
	W0421 19:52:38.180478   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:38.180485   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:38.180547   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:38.219516   58211 cri.go:89] found id: ""
	I0421 19:52:38.219546   58211 logs.go:276] 0 containers: []
	W0421 19:52:38.219557   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:38.219569   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:38.219584   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:38.275666   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:38.275704   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:38.291620   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:38.291649   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:38.366411   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:38.366436   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:38.366449   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:38.452558   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:38.452592   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:41.001015   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:41.016471   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:41.016532   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:41.055975   58211 cri.go:89] found id: ""
	I0421 19:52:41.056000   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.056007   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:41.056013   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:41.056071   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:41.097390   58211 cri.go:89] found id: ""
	I0421 19:52:41.097414   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.097422   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:41.097435   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:41.097492   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:41.139612   58211 cri.go:89] found id: ""
	I0421 19:52:41.139645   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.139656   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:41.139664   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:41.139725   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:41.193386   58211 cri.go:89] found id: ""
	I0421 19:52:41.193415   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.193427   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:41.193435   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:41.193505   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:41.253947   58211 cri.go:89] found id: ""
	I0421 19:52:41.253978   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.253989   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:41.253998   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:41.254074   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:41.307562   58211 cri.go:89] found id: ""
	I0421 19:52:41.307622   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.307649   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:41.307664   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:41.307735   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:41.354441   58211 cri.go:89] found id: ""
	I0421 19:52:41.354465   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.354475   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:41.354483   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:41.354537   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:41.391757   58211 cri.go:89] found id: ""
	I0421 19:52:41.391787   58211 logs.go:276] 0 containers: []
	W0421 19:52:41.391797   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:41.391808   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:41.391826   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:41.436844   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:41.436876   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:41.489552   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:41.489583   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:41.505383   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:41.505413   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:41.583688   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:41.583711   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:41.583723   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:44.161621   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:44.176117   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:44.176178   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:44.220999   58211 cri.go:89] found id: ""
	I0421 19:52:44.221022   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.221030   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:44.221036   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:44.221080   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:44.262433   58211 cri.go:89] found id: ""
	I0421 19:52:44.262466   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.262478   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:44.262485   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:44.262549   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:44.300312   58211 cri.go:89] found id: ""
	I0421 19:52:44.300345   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.300359   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:44.300367   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:44.300434   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:44.337259   58211 cri.go:89] found id: ""
	I0421 19:52:44.337288   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.337297   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:44.337305   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:44.337362   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:44.375409   58211 cri.go:89] found id: ""
	I0421 19:52:44.375444   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.375455   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:44.375465   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:44.375534   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:44.413227   58211 cri.go:89] found id: ""
	I0421 19:52:44.413259   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.413271   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:44.413279   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:44.413348   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:44.453286   58211 cri.go:89] found id: ""
	I0421 19:52:44.453311   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.453321   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:44.453332   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:44.453397   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:44.492237   58211 cri.go:89] found id: ""
	I0421 19:52:44.492267   58211 logs.go:276] 0 containers: []
	W0421 19:52:44.492277   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:44.492287   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:44.492302   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:44.508321   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:44.508361   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:44.595121   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:44.595144   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:44.595158   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:44.687038   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:44.687072   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:44.740502   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:44.740532   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:47.294047   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:47.307395   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:47.307463   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:47.352171   58211 cri.go:89] found id: ""
	I0421 19:52:47.352201   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.352210   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:47.352215   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:47.352277   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:47.398737   58211 cri.go:89] found id: ""
	I0421 19:52:47.398770   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.398782   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:47.398790   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:47.398852   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:47.435267   58211 cri.go:89] found id: ""
	I0421 19:52:47.435303   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.435314   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:47.435323   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:47.435387   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:47.478316   58211 cri.go:89] found id: ""
	I0421 19:52:47.478346   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.478356   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:47.478362   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:47.478425   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:47.522718   58211 cri.go:89] found id: ""
	I0421 19:52:47.522746   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.522758   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:47.522765   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:47.522825   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:47.559407   58211 cri.go:89] found id: ""
	I0421 19:52:47.559433   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.559441   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:47.559448   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:47.559496   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:47.600028   58211 cri.go:89] found id: ""
	I0421 19:52:47.600058   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.600066   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:47.600071   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:47.600138   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:47.640085   58211 cri.go:89] found id: ""
	I0421 19:52:47.640120   58211 logs.go:276] 0 containers: []
	W0421 19:52:47.640138   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:47.640148   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:47.640162   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:47.687407   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:47.687440   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:47.741005   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:47.741039   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:47.757816   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:47.757850   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:47.828153   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:47.828174   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:47.828188   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:50.409992   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:50.426722   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:50.426798   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:50.467647   58211 cri.go:89] found id: ""
	I0421 19:52:50.467675   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.467686   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:50.467693   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:50.467756   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:50.508965   58211 cri.go:89] found id: ""
	I0421 19:52:50.508995   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.509009   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:50.509014   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:50.509064   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:50.545761   58211 cri.go:89] found id: ""
	I0421 19:52:50.545792   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.545804   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:50.545817   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:50.545881   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:50.587462   58211 cri.go:89] found id: ""
	I0421 19:52:50.587487   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.587495   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:50.587501   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:50.587546   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:50.629911   58211 cri.go:89] found id: ""
	I0421 19:52:50.629943   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.629954   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:50.629962   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:50.630021   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:50.671521   58211 cri.go:89] found id: ""
	I0421 19:52:50.671548   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.671556   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:50.671561   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:50.671609   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:50.714616   58211 cri.go:89] found id: ""
	I0421 19:52:50.714641   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.714649   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:50.714655   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:50.714713   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:50.756567   58211 cri.go:89] found id: ""
	I0421 19:52:50.756593   58211 logs.go:276] 0 containers: []
	W0421 19:52:50.756622   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:50.756636   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:50.756651   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:50.807967   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:50.808002   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:50.823891   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:50.823917   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:50.900935   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:50.900959   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:50.900982   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:50.982097   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:50.982132   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:53.534573   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:53.550518   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:53.550590   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:53.590345   58211 cri.go:89] found id: ""
	I0421 19:52:53.590375   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.590386   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:53.590394   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:53.590444   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:53.633403   58211 cri.go:89] found id: ""
	I0421 19:52:53.633430   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.633438   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:53.633443   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:53.633496   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:53.678955   58211 cri.go:89] found id: ""
	I0421 19:52:53.678974   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.678982   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:53.678989   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:53.679055   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:53.718862   58211 cri.go:89] found id: ""
	I0421 19:52:53.718890   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.718901   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:53.718909   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:53.718963   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:53.756246   58211 cri.go:89] found id: ""
	I0421 19:52:53.756278   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.756286   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:53.756293   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:53.756354   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:53.795819   58211 cri.go:89] found id: ""
	I0421 19:52:53.795845   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.795852   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:53.795858   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:53.795913   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:53.834078   58211 cri.go:89] found id: ""
	I0421 19:52:53.834108   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.834120   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:53.834128   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:53.834188   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:53.876398   58211 cri.go:89] found id: ""
	I0421 19:52:53.876427   58211 logs.go:276] 0 containers: []
	W0421 19:52:53.876438   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:53.876449   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:53.876463   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:53.934803   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:53.934835   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:53.952482   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:53.952518   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:54.027358   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:54.027377   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:54.027389   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:54.109794   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:54.109828   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:56.657545   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:56.676976   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:56.677049   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:56.723120   58211 cri.go:89] found id: ""
	I0421 19:52:56.723153   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.723164   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:56.723172   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:56.723235   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:56.764661   58211 cri.go:89] found id: ""
	I0421 19:52:56.764691   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.764701   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:56.764708   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:56.764791   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:56.815435   58211 cri.go:89] found id: ""
	I0421 19:52:56.815461   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.815471   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:56.815479   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:56.815538   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:56.858128   58211 cri.go:89] found id: ""
	I0421 19:52:56.858159   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.858170   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:56.858178   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:56.858252   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:52:56.900666   58211 cri.go:89] found id: ""
	I0421 19:52:56.900695   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.900711   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:52:56.900719   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:52:56.900801   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:52:56.940073   58211 cri.go:89] found id: ""
	I0421 19:52:56.940102   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.940111   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:52:56.940117   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:52:56.940164   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:52:56.981962   58211 cri.go:89] found id: ""
	I0421 19:52:56.981996   58211 logs.go:276] 0 containers: []
	W0421 19:52:56.982007   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:52:56.982014   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:52:56.982084   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:52:57.019028   58211 cri.go:89] found id: ""
	I0421 19:52:57.019051   58211 logs.go:276] 0 containers: []
	W0421 19:52:57.019060   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:52:57.019069   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:52:57.019084   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:52:57.062031   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:52:57.062076   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:52:57.116286   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:52:57.116314   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:52:57.132570   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:52:57.132594   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:52:57.215015   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:52:57.215036   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:52:57.215049   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:52:59.794380   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:52:59.809626   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:52:59.809708   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:52:59.850912   58211 cri.go:89] found id: ""
	I0421 19:52:59.850935   58211 logs.go:276] 0 containers: []
	W0421 19:52:59.850942   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:52:59.850948   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:52:59.850992   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:52:59.892826   58211 cri.go:89] found id: ""
	I0421 19:52:59.892863   58211 logs.go:276] 0 containers: []
	W0421 19:52:59.892873   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:52:59.892879   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:52:59.892932   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:52:59.937695   58211 cri.go:89] found id: ""
	I0421 19:52:59.937721   58211 logs.go:276] 0 containers: []
	W0421 19:52:59.937729   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:52:59.937736   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:52:59.937797   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:52:59.982442   58211 cri.go:89] found id: ""
	I0421 19:52:59.982465   58211 logs.go:276] 0 containers: []
	W0421 19:52:59.982474   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:52:59.982482   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:52:59.982537   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:00.020763   58211 cri.go:89] found id: ""
	I0421 19:53:00.020793   58211 logs.go:276] 0 containers: []
	W0421 19:53:00.020803   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:00.020809   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:00.020889   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:00.061234   58211 cri.go:89] found id: ""
	I0421 19:53:00.061266   58211 logs.go:276] 0 containers: []
	W0421 19:53:00.061274   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:00.061279   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:00.061335   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:00.098028   58211 cri.go:89] found id: ""
	I0421 19:53:00.098076   58211 logs.go:276] 0 containers: []
	W0421 19:53:00.098087   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:00.098095   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:00.098164   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:00.138271   58211 cri.go:89] found id: ""
	I0421 19:53:00.138304   58211 logs.go:276] 0 containers: []
	W0421 19:53:00.138315   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:00.138326   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:00.138342   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:00.191215   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:00.191244   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:00.207639   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:00.207682   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:00.288131   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:00.288155   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:00.288172   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:00.378294   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:00.378344   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:02.927074   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:02.942469   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:02.942533   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:02.980678   58211 cri.go:89] found id: ""
	I0421 19:53:02.980711   58211 logs.go:276] 0 containers: []
	W0421 19:53:02.980724   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:02.980732   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:02.980804   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:03.022777   58211 cri.go:89] found id: ""
	I0421 19:53:03.022812   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.022825   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:03.022832   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:03.022915   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:03.064790   58211 cri.go:89] found id: ""
	I0421 19:53:03.064816   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.064823   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:03.064829   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:03.064876   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:03.106455   58211 cri.go:89] found id: ""
	I0421 19:53:03.106486   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.106496   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:03.106503   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:03.106568   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:03.152079   58211 cri.go:89] found id: ""
	I0421 19:53:03.152112   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.152132   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:03.152140   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:03.152205   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:03.191840   58211 cri.go:89] found id: ""
	I0421 19:53:03.191866   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.191875   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:03.191881   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:03.191928   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:03.234491   58211 cri.go:89] found id: ""
	I0421 19:53:03.234522   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.234544   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:03.234553   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:03.234630   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:03.270253   58211 cri.go:89] found id: ""
	I0421 19:53:03.270283   58211 logs.go:276] 0 containers: []
	W0421 19:53:03.270293   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:03.270304   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:03.270318   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:03.324599   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:03.324640   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:03.341144   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:03.341176   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:03.420435   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:03.420465   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:03.420481   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:03.501765   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:03.501800   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:06.048072   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:06.062128   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:06.062206   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:06.098841   58211 cri.go:89] found id: ""
	I0421 19:53:06.098870   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.098882   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:06.098890   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:06.098973   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:06.137087   58211 cri.go:89] found id: ""
	I0421 19:53:06.137111   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.137118   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:06.137124   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:06.137177   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:06.181723   58211 cri.go:89] found id: ""
	I0421 19:53:06.181748   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.181758   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:06.181766   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:06.181822   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:06.222566   58211 cri.go:89] found id: ""
	I0421 19:53:06.222594   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.222606   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:06.222613   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:06.222684   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:06.260738   58211 cri.go:89] found id: ""
	I0421 19:53:06.260764   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.260775   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:06.260781   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:06.260847   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:06.299098   58211 cri.go:89] found id: ""
	I0421 19:53:06.299130   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.299139   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:06.299146   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:06.299210   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:06.340707   58211 cri.go:89] found id: ""
	I0421 19:53:06.340733   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.340743   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:06.340750   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:06.340824   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:06.381779   58211 cri.go:89] found id: ""
	I0421 19:53:06.381804   58211 logs.go:276] 0 containers: []
	W0421 19:53:06.381812   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:06.381819   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:06.381831   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:06.438128   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:06.438157   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:06.456229   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:06.456257   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:06.536788   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:06.536811   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:06.536825   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:06.626678   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:06.626712   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:09.179078   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:09.193733   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:09.193790   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:09.235182   58211 cri.go:89] found id: ""
	I0421 19:53:09.235212   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.235223   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:09.235231   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:09.235294   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:09.277383   58211 cri.go:89] found id: ""
	I0421 19:53:09.277412   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.277420   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:09.277426   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:09.277481   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:09.317877   58211 cri.go:89] found id: ""
	I0421 19:53:09.317908   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.317920   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:09.317927   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:09.317980   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:09.357432   58211 cri.go:89] found id: ""
	I0421 19:53:09.357460   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.357468   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:09.357473   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:09.357518   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:09.399995   58211 cri.go:89] found id: ""
	I0421 19:53:09.400023   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.400031   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:09.400036   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:09.400090   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:09.437914   58211 cri.go:89] found id: ""
	I0421 19:53:09.437948   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.437961   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:09.437968   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:09.438026   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:09.480992   58211 cri.go:89] found id: ""
	I0421 19:53:09.481030   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.481040   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:09.481048   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:09.481104   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:09.522654   58211 cri.go:89] found id: ""
	I0421 19:53:09.522686   58211 logs.go:276] 0 containers: []
	W0421 19:53:09.522698   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:09.522710   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:09.522731   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:09.574741   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:09.574774   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:09.593001   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:09.593034   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:09.670139   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:09.670163   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:09.670177   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:09.753624   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:09.753653   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:12.297076   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:12.311928   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:12.311997   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:12.349440   58211 cri.go:89] found id: ""
	I0421 19:53:12.349466   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.349477   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:12.349484   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:12.349542   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:12.387494   58211 cri.go:89] found id: ""
	I0421 19:53:12.387523   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.387535   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:12.387541   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:12.387604   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:12.433491   58211 cri.go:89] found id: ""
	I0421 19:53:12.433521   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.433532   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:12.433541   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:12.433598   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:12.474328   58211 cri.go:89] found id: ""
	I0421 19:53:12.474356   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.474365   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:12.474373   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:12.474428   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:12.509157   58211 cri.go:89] found id: ""
	I0421 19:53:12.509190   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.509199   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:12.509206   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:12.509259   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:12.544694   58211 cri.go:89] found id: ""
	I0421 19:53:12.544722   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.544731   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:12.544737   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:12.544798   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:12.580032   58211 cri.go:89] found id: ""
	I0421 19:53:12.580065   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.580078   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:12.580086   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:12.580142   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:12.619658   58211 cri.go:89] found id: ""
	I0421 19:53:12.619684   58211 logs.go:276] 0 containers: []
	W0421 19:53:12.619695   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:12.619705   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:12.619727   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:12.635319   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:12.635350   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:12.711225   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:12.711247   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:12.711260   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:12.793508   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:12.793548   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:12.840349   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:12.840377   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:15.393536   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:15.408527   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:15.408601   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:15.452459   58211 cri.go:89] found id: ""
	I0421 19:53:15.452492   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.452504   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:15.452512   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:15.452570   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:15.493513   58211 cri.go:89] found id: ""
	I0421 19:53:15.493549   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.493560   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:15.493567   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:15.493632   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:15.532218   58211 cri.go:89] found id: ""
	I0421 19:53:15.532244   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.532253   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:15.532262   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:15.532325   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:15.575531   58211 cri.go:89] found id: ""
	I0421 19:53:15.575561   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.575573   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:15.575581   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:15.575645   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:15.614383   58211 cri.go:89] found id: ""
	I0421 19:53:15.614411   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.614422   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:15.614430   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:15.614486   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:15.651733   58211 cri.go:89] found id: ""
	I0421 19:53:15.651764   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.651774   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:15.651797   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:15.651858   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:15.688376   58211 cri.go:89] found id: ""
	I0421 19:53:15.688412   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.688424   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:15.688432   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:15.688512   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:15.727334   58211 cri.go:89] found id: ""
	I0421 19:53:15.727360   58211 logs.go:276] 0 containers: []
	W0421 19:53:15.727368   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:15.727376   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:15.727388   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:15.807802   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:15.807840   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:15.866695   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:15.866736   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:15.920899   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:15.920935   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:15.936429   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:15.936463   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:16.019191   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
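Every retry in this window ends the same way: the crictl queries return no container IDs and the bundled kubectl cannot reach localhost:8443, so the API server never came up on this node. A quick follow-up check, sketched here as an assumption (curl is not among the commands minikube actually ran in this log), would be to ask whether anything answers on the apiserver port at all:

	# hypothetical check, not from the log: see if any process answers on localhost:8443
	curl -ksf https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"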
	I0421 19:53:18.520149   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:18.537374   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:18.537439   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:18.577018   58211 cri.go:89] found id: ""
	I0421 19:53:18.577051   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.577058   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:18.577064   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:18.577111   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:18.617878   58211 cri.go:89] found id: ""
	I0421 19:53:18.617908   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.617918   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:18.617925   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:18.617990   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:18.657007   58211 cri.go:89] found id: ""
	I0421 19:53:18.657041   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.657050   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:18.657057   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:18.657116   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:18.703202   58211 cri.go:89] found id: ""
	I0421 19:53:18.703227   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.703236   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:18.703244   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:18.703304   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:18.743385   58211 cri.go:89] found id: ""
	I0421 19:53:18.743413   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.743421   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:18.743426   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:18.743477   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:18.783301   58211 cri.go:89] found id: ""
	I0421 19:53:18.783325   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.783333   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:18.783340   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:18.783387   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:18.819019   58211 cri.go:89] found id: ""
	I0421 19:53:18.819049   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.819061   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:18.819069   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:18.819125   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:18.857168   58211 cri.go:89] found id: ""
	I0421 19:53:18.857211   58211 logs.go:276] 0 containers: []
	W0421 19:53:18.857221   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:18.857230   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:18.857243   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:18.933806   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:18.933830   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:18.933843   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:19.013180   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:19.013213   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:19.059594   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:19.059630   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:19.113959   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:19.113990   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:21.629919   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:21.645955   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:21.646020   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:21.688858   58211 cri.go:89] found id: ""
	I0421 19:53:21.688883   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.688891   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:21.688896   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:21.688957   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:21.727948   58211 cri.go:89] found id: ""
	I0421 19:53:21.727975   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.727986   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:21.727992   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:21.728037   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:21.767147   58211 cri.go:89] found id: ""
	I0421 19:53:21.767172   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.767183   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:21.767190   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:21.767246   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:21.807536   58211 cri.go:89] found id: ""
	I0421 19:53:21.807560   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.807569   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:21.807574   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:21.807622   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:21.849236   58211 cri.go:89] found id: ""
	I0421 19:53:21.849261   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.849268   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:21.849274   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:21.849329   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:21.893435   58211 cri.go:89] found id: ""
	I0421 19:53:21.893463   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.893475   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:21.893483   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:21.893544   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:21.936797   58211 cri.go:89] found id: ""
	I0421 19:53:21.936826   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.936837   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:21.936844   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:21.936908   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:21.977980   58211 cri.go:89] found id: ""
	I0421 19:53:21.978005   58211 logs.go:276] 0 containers: []
	W0421 19:53:21.978012   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:21.978021   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:21.978032   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:22.032737   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:22.032781   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:22.048872   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:22.048913   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:22.137809   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:22.137834   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:22.137850   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:22.234765   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:22.234809   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:24.796300   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:24.810669   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:53:24.810742   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:53:24.846906   58211 cri.go:89] found id: ""
	I0421 19:53:24.846937   58211 logs.go:276] 0 containers: []
	W0421 19:53:24.846948   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:53:24.846956   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:53:24.847016   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:53:24.888009   58211 cri.go:89] found id: ""
	I0421 19:53:24.888045   58211 logs.go:276] 0 containers: []
	W0421 19:53:24.888058   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:53:24.888065   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:53:24.888122   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:53:24.931636   58211 cri.go:89] found id: ""
	I0421 19:53:24.931673   58211 logs.go:276] 0 containers: []
	W0421 19:53:24.931686   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:53:24.931695   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:53:24.931766   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:53:24.970182   58211 cri.go:89] found id: ""
	I0421 19:53:24.970219   58211 logs.go:276] 0 containers: []
	W0421 19:53:24.970232   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:53:24.970242   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:53:24.970318   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:53:25.016034   58211 cri.go:89] found id: ""
	I0421 19:53:25.016057   58211 logs.go:276] 0 containers: []
	W0421 19:53:25.016066   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:53:25.016072   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:53:25.016121   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:53:25.060841   58211 cri.go:89] found id: ""
	I0421 19:53:25.060872   58211 logs.go:276] 0 containers: []
	W0421 19:53:25.060883   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:53:25.060891   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:53:25.060950   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:53:25.103816   58211 cri.go:89] found id: ""
	I0421 19:53:25.103855   58211 logs.go:276] 0 containers: []
	W0421 19:53:25.103871   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:53:25.103878   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:53:25.103939   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:53:25.147813   58211 cri.go:89] found id: ""
	I0421 19:53:25.147845   58211 logs.go:276] 0 containers: []
	W0421 19:53:25.147856   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:53:25.147868   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:53:25.147881   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:53:25.225069   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:53:25.225109   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0421 19:53:25.276061   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:53:25.276089   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:53:25.329907   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:53:25.329941   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:53:25.345222   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:53:25.345256   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:53:25.418805   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:53:27.919396   58211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:53:27.936235   58211 kubeadm.go:591] duration metric: took 4m2.351020511s to restartPrimaryControlPlane
	W0421 19:53:27.936319   58211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0421 19:53:27.936345   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:53:33.126227   58211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.189856312s)
	I0421 19:53:33.126299   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:53:33.142571   58211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:53:33.154845   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:53:33.165792   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:53:33.165815   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:53:33.165870   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:53:33.178015   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:53:33.178079   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:53:33.189574   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:53:33.201229   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:53:33.201302   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:53:33.213106   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:53:33.223644   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:53:33.223704   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:53:33.235254   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:53:33.245694   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:53:33.245746   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:53:33.256891   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:53:33.487611   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:55:29.590689   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:55:29.590767   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:55:29.592377   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:29.592430   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:29.592527   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:29.592662   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:29.592794   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:29.592892   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:29.595022   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:29.595115   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:29.595190   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:29.595263   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:29.595311   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:29.595375   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:29.595433   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:29.595520   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:29.595598   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:29.595680   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:29.595775   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:29.595824   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:29.595875   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:29.595919   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:29.595982   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:29.596047   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:29.596091   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:29.596174   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:29.596256   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:29.596301   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:29.596367   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.598820   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:29.598926   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:29.598993   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:29.599054   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:29.599162   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:29.599331   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:29.599418   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:55:29.599516   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599705   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.599772   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599936   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600041   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600191   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600244   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600389   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600481   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600654   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600669   58211 kubeadm.go:309] 
	I0421 19:55:29.600702   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:55:29.600737   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:55:29.600743   58211 kubeadm.go:309] 
	I0421 19:55:29.600777   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:55:29.600810   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:55:29.600901   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:55:29.600908   58211 kubeadm.go:309] 
	I0421 19:55:29.601009   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:55:29.601057   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:55:29.601109   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:55:29.601118   58211 kubeadm.go:309] 
	I0421 19:55:29.601224   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:55:29.601323   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:55:29.601333   58211 kubeadm.go:309] 
	I0421 19:55:29.601485   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:55:29.601579   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:55:29.601646   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:55:29.601751   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:55:29.601835   58211 kubeadm.go:309] 
	W0421 19:55:29.601862   58211 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0421 19:55:29.601908   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:55:30.075850   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:30.092432   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:55:30.103405   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:55:30.103429   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:55:30.103473   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:55:30.114018   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:55:30.114073   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:55:30.124410   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:55:30.134021   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:55:30.134076   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:55:30.143946   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.153926   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:55:30.153973   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.164013   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:55:30.173459   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:55:30.173512   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:55:30.184067   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:55:30.259108   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:30.259195   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:30.422144   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:30.422317   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:30.422497   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:30.619194   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:30.621135   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:30.621258   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:30.621314   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:30.621437   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:30.621530   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:30.621617   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:30.621956   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:30.622478   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:30.623068   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:30.623509   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:30.624072   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:30.624110   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:30.624183   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:30.871049   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:30.931466   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:31.088680   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:31.275358   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:31.305344   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:31.307220   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:31.307289   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:31.484365   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:31.486164   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:31.486312   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:31.492868   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:31.494787   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:31.496104   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:31.500190   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:56:11.503250   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:56:11.503361   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:11.503618   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:16.504469   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:16.504743   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:26.505496   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:26.505769   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:46.505851   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:46.506140   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505043   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:57:26.505356   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505385   58211 kubeadm.go:309] 
	I0421 19:57:26.505436   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:57:26.505495   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:57:26.505505   58211 kubeadm.go:309] 
	I0421 19:57:26.505553   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:57:26.505596   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:57:26.505720   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:57:26.505730   58211 kubeadm.go:309] 
	I0421 19:57:26.505839   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:57:26.505883   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:57:26.505912   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:57:26.505919   58211 kubeadm.go:309] 
	I0421 19:57:26.506020   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:57:26.506152   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:57:26.506181   58211 kubeadm.go:309] 
	I0421 19:57:26.506346   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:57:26.506480   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:57:26.506581   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:57:26.506702   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:57:26.506721   58211 kubeadm.go:309] 
	I0421 19:57:26.507115   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:57:26.507237   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:57:26.507330   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:57:26.507409   58211 kubeadm.go:393] duration metric: took 8m0.981544676s to StartCluster
	I0421 19:57:26.507461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:57:26.507523   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:57:26.556647   58211 cri.go:89] found id: ""
	I0421 19:57:26.556676   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.556687   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:57:26.556695   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:57:26.556748   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:57:26.595025   58211 cri.go:89] found id: ""
	I0421 19:57:26.595055   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.595064   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:57:26.595069   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:57:26.595143   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:57:26.634084   58211 cri.go:89] found id: ""
	I0421 19:57:26.634115   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.634126   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:57:26.634134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:57:26.634201   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:57:26.672409   58211 cri.go:89] found id: ""
	I0421 19:57:26.672439   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.672450   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:57:26.672458   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:57:26.672515   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:57:26.720123   58211 cri.go:89] found id: ""
	I0421 19:57:26.720151   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.720159   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:57:26.720165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:57:26.720219   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:57:26.756889   58211 cri.go:89] found id: ""
	I0421 19:57:26.756918   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.756929   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:57:26.756936   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:57:26.757044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:57:26.802160   58211 cri.go:89] found id: ""
	I0421 19:57:26.802188   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.802197   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:57:26.802204   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:57:26.802264   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:57:26.841543   58211 cri.go:89] found id: ""
	I0421 19:57:26.841567   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.841574   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:57:26.841583   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:57:26.841598   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:57:26.894547   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:57:26.894575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:57:26.909052   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:57:26.909077   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:57:27.002127   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:57:27.002150   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:57:27.002166   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:57:27.120460   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:57:27.120494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0421 19:57:27.170858   58211 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:57:27.170914   58211 out.go:239] * 
	W0421 19:57:27.170969   58211 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.170990   58211 out.go:239] * 
	W0421 19:57:27.171868   58211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:57:27.174893   58211 out.go:177] 
	W0421 19:57:27.176215   58211 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.176288   58211 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:57:27.176319   58211 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:57:27.177779   58211 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
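The kubeadm failure above comes down to the kubelet never answering its health check on 127.0.0.1:10248, so the wait-control-plane phase times out. As a hedged aside (these commands are for manual debugging and were not run by the test; the ssh step is assumed from the standard minikube CLI, the remaining lines are taken verbatim from the advice in the log):

    out/minikube-linux-amd64 -p old-k8s-version-867585 ssh
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause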
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (251.330206ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
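The "Suggestion" line in the log above points at a possible kubelet cgroup-driver mismatch. A sketch of the retry it proposes, reusing the exact arguments from this test's failed start command with the suggested flag appended (whether this actually resolves the failure is not verified in this report):

    out/minikube-linux-amd64 start -p old-k8s-version-867585 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd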
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-867585 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-867585        | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-167454       | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC | 21 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:54:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:54:52.830912   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.830926   62197 out.go:304] Setting ErrFile to fd 2...
	I0421 19:54:52.830932   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.831126   62197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:54:52.831742   62197 out.go:298] Setting JSON to false
	I0421 19:54:52.832674   62197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5791,"bootTime":1713723502,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:54:52.832739   62197 start.go:139] virtualization: kvm guest
	I0421 19:54:52.835455   62197 out.go:177] * [embed-certs-727235] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:54:52.837412   62197 notify.go:220] Checking for updates...
	I0421 19:54:52.837418   62197 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:54:52.839465   62197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:54:52.841250   62197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:54:52.842894   62197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:54:52.844479   62197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:54:52.845967   62197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:54:52.847931   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:54:52.848387   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.848464   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.864769   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0421 19:54:52.865105   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.865623   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.865642   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.865919   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.866109   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.866305   62197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:54:52.866589   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.866622   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.880497   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0421 19:54:52.880874   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.881355   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.881380   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.881691   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.881883   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.916395   62197 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:54:52.917730   62197 start.go:297] selected driver: kvm2
	I0421 19:54:52.917753   62197 start.go:901] validating driver "kvm2" against &{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.917858   62197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:54:52.918512   62197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.918585   62197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:54:52.933446   62197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:54:52.933791   62197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:54:52.933845   62197 cni.go:84] Creating CNI manager for ""
	I0421 19:54:52.933858   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:54:52.933901   62197 start.go:340] cluster config:
	{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.933981   62197 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.936907   62197 out.go:177] * Starting "embed-certs-727235" primary control-plane node in "embed-certs-727235" cluster
	I0421 19:54:52.938596   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:54:52.938626   62197 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:54:52.938633   62197 cache.go:56] Caching tarball of preloaded images
	I0421 19:54:52.938690   62197 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:54:52.938701   62197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:54:52.938791   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:54:52.938960   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:54:52.938995   62197 start.go:364] duration metric: took 19.691µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:54:52.939006   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:54:52.939011   62197 fix.go:54] fixHost starting: 
	I0421 19:54:52.939248   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.939274   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.953191   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0421 19:54:52.953602   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.953994   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.954024   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.954454   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.954602   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.954750   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:54:52.956153   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Running err=<nil>
	W0421 19:54:52.956167   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:54:52.958195   62197 out.go:177] * Updating the running kvm2 "embed-certs-727235" VM ...
	I0421 19:54:52.959459   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:54:52.959476   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.959678   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:54:52.961705   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:51:24 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:54:52.962165   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962245   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:54:52.962392   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962555   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962682   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:54:52.962853   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:54:52.963028   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:54:52.963038   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:54:55.842410   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:58.070842   57617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.405000958s)
	I0421 19:54:58.070936   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:54:58.089413   57617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:54:58.101786   57617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:54:58.114021   57617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:54:58.114065   57617 kubeadm.go:156] found existing configuration files:
	
	I0421 19:54:58.114126   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0421 19:54:58.124228   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:54:58.124296   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:54:58.135037   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0421 19:54:58.144890   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:54:58.144958   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:54:58.155403   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.165155   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:54:58.165207   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.175703   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0421 19:54:58.185428   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:54:58.185521   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:54:58.195328   57617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:54:58.257787   57617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:54:58.257868   57617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:54:58.432626   57617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:54:58.432766   57617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:54:58.432943   57617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:54:58.677807   57617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:54:58.679655   57617 out.go:204]   - Generating certificates and keys ...
	I0421 19:54:58.679763   57617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:54:58.679856   57617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:54:58.679974   57617 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:54:58.680053   57617 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:54:58.680125   57617 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:54:58.680177   57617 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:54:58.681691   57617 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:54:58.682034   57617 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:54:58.682257   57617 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:54:58.682547   57617 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:54:58.682770   57617 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:54:58.682840   57617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:54:58.938223   57617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:54:58.989244   57617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:54:59.196060   57617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:54:59.378330   57617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:54:59.435654   57617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:54:59.436159   57617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:54:59.440839   57617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:54:58.914303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:59.442694   57617 out.go:204]   - Booting up control plane ...
	I0421 19:54:59.442826   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:54:59.442942   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:54:59.443122   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:54:59.466298   57617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:54:59.469370   57617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:54:59.469656   57617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:54:59.622281   57617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:54:59.622433   57617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:55:00.123513   57617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.401309ms
	I0421 19:55:00.123606   57617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:55:05.627324   57617 kubeadm.go:309] [api-check] The API server is healthy after 5.503528473s
	I0421 19:55:05.644392   57617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:55:05.666212   57617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:55:05.696150   57617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:55:05.696487   57617 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-167454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:55:05.709873   57617 kubeadm.go:309] [bootstrap-token] Using token: ypxtpg.5u6l3v2as04iv2aj
	I0421 19:55:05.711407   57617 out.go:204]   - Configuring RBAC rules ...
	I0421 19:55:05.711556   57617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:55:05.721552   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:55:05.735168   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:55:05.739580   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:55:05.743466   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:55:05.747854   57617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:55:06.034775   57617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:55:06.468585   57617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:55:07.036924   57617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:55:07.036983   57617 kubeadm.go:309] 
	I0421 19:55:07.037040   57617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:55:07.037060   57617 kubeadm.go:309] 
	I0421 19:55:07.037199   57617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:55:07.037218   57617 kubeadm.go:309] 
	I0421 19:55:07.037259   57617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:55:07.037348   57617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:55:07.037419   57617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:55:07.037433   57617 kubeadm.go:309] 
	I0421 19:55:07.037526   57617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:55:07.037540   57617 kubeadm.go:309] 
	I0421 19:55:07.037604   57617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:55:07.037615   57617 kubeadm.go:309] 
	I0421 19:55:07.037681   57617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:55:07.037760   57617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:55:07.037823   57617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:55:07.037828   57617 kubeadm.go:309] 
	I0421 19:55:07.037899   57617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:55:07.037964   57617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:55:07.037970   57617 kubeadm.go:309] 
	I0421 19:55:07.038098   57617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038255   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 19:55:07.038283   57617 kubeadm.go:309] 	--control-plane 
	I0421 19:55:07.038288   57617 kubeadm.go:309] 
	I0421 19:55:07.038400   57617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:55:07.038411   57617 kubeadm.go:309] 
	I0421 19:55:07.038517   57617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038672   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 19:55:07.038956   57617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:55:07.038982   57617 cni.go:84] Creating CNI manager for ""
	I0421 19:55:07.038998   57617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:55:07.040852   57617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:55:04.994338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:07.042257   57617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:55:07.057287   57617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 19:55:07.078228   57617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:55:07.078330   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.078390   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167454 minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=default-k8s-diff-port-167454 minikube.k8s.io/primary=true
	I0421 19:55:07.128726   57617 ops.go:34] apiserver oom_adj: -16
	I0421 19:55:07.277531   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.778563   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.066312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:08.278441   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.778051   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.277768   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.777868   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.278602   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.777607   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.278260   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.777609   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.277684   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.778116   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.146347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:17.218265   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:13.278439   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:13.777901   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.278214   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.777957   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.278369   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.778113   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.277991   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.778322   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.278350   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.778144   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.278465   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.778049   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.278228   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.777615   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.945015   57617 kubeadm.go:1107] duration metric: took 12.866746923s to wait for elevateKubeSystemPrivileges
	W0421 19:55:19.945062   57617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:55:19.945073   57617 kubeadm.go:393] duration metric: took 5m11.113256567s to StartCluster
	I0421 19:55:19.945094   57617 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.945186   57617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:55:19.947618   57617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.947919   57617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.23 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:55:19.949819   57617 out.go:177] * Verifying Kubernetes components...
	I0421 19:55:19.947983   57617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:55:19.948132   57617 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:55:19.951664   57617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:55:19.951671   57617 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951685   57617 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951708   57617 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951718   57617 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-167454"
	I0421 19:55:19.951720   57617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167454"
	W0421 19:55:19.951730   57617 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:55:19.951741   57617 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.951753   57617 addons.go:243] addon metrics-server should already be in state true
	I0421 19:55:19.951766   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.951781   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.952059   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952095   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952147   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952169   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952170   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952378   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.969767   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0421 19:55:19.970291   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.971023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.971053   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.971517   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.971747   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.971966   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0421 19:55:19.972325   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0421 19:55:19.972539   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.972691   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.973050   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973075   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973313   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973336   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973408   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973712   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973986   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974023   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.974287   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974321   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.976061   57617 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.976086   57617 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:55:19.976116   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.976473   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.976513   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.989851   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I0421 19:55:19.990053   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0421 19:55:19.990494   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.990573   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.991023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991039   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991170   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991197   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991380   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991527   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991556   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.991713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.993398   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995704   57617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:55:19.994181   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995594   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0421 19:55:19.997429   57617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:19.997450   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:55:19.997470   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:19.998995   57617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 19:55:19.997642   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.000129   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000728   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.000743   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000638   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.000805   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 19:55:20.000816   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 19:55:20.000826   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.000991   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.001147   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.001328   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.001340   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.001362   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.001763   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.002313   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:20.002335   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:20.003803   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004388   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.004404   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004602   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.004792   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.004958   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.005128   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.018016   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0421 19:55:20.018651   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.019177   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.019196   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.019422   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.019702   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:20.021066   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:20.021324   57617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.021340   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:55:20.021357   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.024124   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024503   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.024524   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024686   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.024848   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.025030   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.025184   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.214689   57617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:55:20.264530   57617 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.281976   57617 node_ready.go:49] node "default-k8s-diff-port-167454" has status "Ready":"True"
	I0421 19:55:20.281999   57617 node_ready.go:38] duration metric: took 17.434628ms for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.282007   57617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:20.297108   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:20.386102   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.408686   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 19:55:20.408706   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 19:55:20.416022   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:20.455756   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 19:55:20.455778   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 19:55:20.603535   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.603559   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 19:55:20.690543   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.842718   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.842753   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843074   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843148   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843163   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.843172   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.843191   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843475   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843511   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843525   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.856272   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.856294   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.856618   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.856636   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.856673   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550249   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13418491s)
	I0421 19:55:21.550297   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550305   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550577   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550654   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:21.550663   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550675   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550684   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550928   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550946   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.853935   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.853970   57617 pod_ready.go:81] duration metric: took 1.556832657s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.853984   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924815   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.924845   57617 pod_ready.go:81] duration metric: took 70.852928ms for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924857   57617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955217   57617 pod_ready.go:92] pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.955246   57617 pod_ready.go:81] duration metric: took 30.380253ms for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955259   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975065   57617 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.975094   57617 pod_ready.go:81] duration metric: took 19.818959ms for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975106   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981884   57617 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.981907   57617 pod_ready.go:81] duration metric: took 6.791796ms for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981919   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.001934   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311352362s)
	I0421 19:55:22.001984   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002000   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002311   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002369   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002330   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.002410   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002434   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002649   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002689   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002705   57617 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-167454"
	I0421 19:55:22.002713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.005036   57617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0421 19:55:22.006362   57617 addons.go:505] duration metric: took 2.058380621s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0421 19:55:22.269772   57617 pod_ready.go:92] pod "kube-proxy-wmv4v" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.269798   57617 pod_ready.go:81] duration metric: took 287.872366ms for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.269808   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668470   57617 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.668494   57617 pod_ready.go:81] duration metric: took 398.679544ms for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668502   57617 pod_ready.go:38] duration metric: took 2.386486578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:22.668516   57617 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:55:22.668560   57617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:55:22.688191   57617 api_server.go:72] duration metric: took 2.740229162s to wait for apiserver process to appear ...
	I0421 19:55:22.688224   57617 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:55:22.688244   57617 api_server.go:253] Checking apiserver healthz at https://192.168.61.23:8444/healthz ...
	I0421 19:55:22.699424   57617 api_server.go:279] https://192.168.61.23:8444/healthz returned 200:
	ok
	I0421 19:55:22.700614   57617 api_server.go:141] control plane version: v1.30.0
	I0421 19:55:22.700636   57617 api_server.go:131] duration metric: took 12.404937ms to wait for apiserver health ...
	I0421 19:55:22.700643   57617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:55:22.873594   57617 system_pods.go:59] 9 kube-system pods found
	I0421 19:55:22.873622   57617 system_pods.go:61] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:22.873631   57617 system_pods.go:61] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:22.873635   57617 system_pods.go:61] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:22.873639   57617 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:22.873643   57617 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:22.873647   57617 system_pods.go:61] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:22.873651   57617 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:22.873657   57617 system_pods.go:61] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:22.873698   57617 system_pods.go:61] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:22.873717   57617 system_pods.go:74] duration metric: took 173.068164ms to wait for pod list to return data ...
	I0421 19:55:22.873731   57617 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:55:23.068026   57617 default_sa.go:45] found service account: "default"
	I0421 19:55:23.068053   57617 default_sa.go:55] duration metric: took 194.313071ms for default service account to be created ...
	I0421 19:55:23.068064   57617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:55:23.272118   57617 system_pods.go:86] 9 kube-system pods found
	I0421 19:55:23.272148   57617 system_pods.go:89] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:23.272156   57617 system_pods.go:89] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:23.272162   57617 system_pods.go:89] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:23.272168   57617 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:23.272173   57617 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:23.272178   57617 system_pods.go:89] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:23.272184   57617 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:23.272194   57617 system_pods.go:89] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:23.272200   57617 system_pods.go:89] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:23.272212   57617 system_pods.go:126] duration metric: took 204.142116ms to wait for k8s-apps to be running ...
	I0421 19:55:23.272231   57617 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:55:23.272283   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:23.288800   57617 system_svc.go:56] duration metric: took 16.572799ms WaitForService to wait for kubelet
	I0421 19:55:23.288829   57617 kubeadm.go:576] duration metric: took 3.340874079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:55:23.288851   57617 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:55:23.469503   57617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:55:23.469541   57617 node_conditions.go:123] node cpu capacity is 2
	I0421 19:55:23.469554   57617 node_conditions.go:105] duration metric: took 180.696423ms to run NodePressure ...
	I0421 19:55:23.469567   57617 start.go:240] waiting for startup goroutines ...
	I0421 19:55:23.469576   57617 start.go:245] waiting for cluster config update ...
	I0421 19:55:23.469590   57617 start.go:254] writing updated cluster config ...
	I0421 19:55:23.469941   57617 ssh_runner.go:195] Run: rm -f paused
	I0421 19:55:23.521989   57617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:55:23.524030   57617 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-167454" cluster and "default" namespace by default
	I0421 19:55:23.298271   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:29.590689   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:55:29.590767   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:55:29.592377   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:29.592430   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:29.592527   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:29.592662   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:29.592794   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:29.592892   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:29.595022   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:29.595115   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:29.595190   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:29.595263   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:29.595311   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:29.595375   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:29.595433   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:29.595520   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:29.595598   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:29.595680   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:29.595775   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:29.595824   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:29.595875   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:29.595919   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:29.595982   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:29.596047   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:29.596091   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:29.596174   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:29.596256   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:29.596301   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:29.596367   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.598820   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:29.598926   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:29.598993   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:29.599054   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:29.599162   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:29.599331   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:29.599418   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:55:29.599516   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599705   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.599772   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599936   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600041   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600191   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600244   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600389   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600481   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600654   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600669   58211 kubeadm.go:309] 
	I0421 19:55:29.600702   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:55:29.600737   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:55:29.600743   58211 kubeadm.go:309] 
	I0421 19:55:29.600777   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:55:29.600810   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:55:29.600901   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:55:29.600908   58211 kubeadm.go:309] 
	I0421 19:55:29.601009   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:55:29.601057   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:55:29.601109   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:55:29.601118   58211 kubeadm.go:309] 
	I0421 19:55:29.601224   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:55:29.601323   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:55:29.601333   58211 kubeadm.go:309] 
	I0421 19:55:29.601485   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:55:29.601579   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:55:29.601646   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:55:29.601751   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:55:29.601835   58211 kubeadm.go:309] 
	W0421 19:55:29.601862   58211 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
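	The error text above already names the concrete checks to run on the node itself. A minimal way to run them by hand (a sketch; the profile name old-k8s-version-867585 is taken from the CRI-O and kubelet sections later in this report, the crio socket path from the error text):

		# open a shell on the failing node
		$ minikube ssh -p old-k8s-version-867585
		# is the kubelet service up, and if not, why?
		$ sudo systemctl status kubelet
		$ sudo journalctl -xeu kubelet
		# were any control-plane containers created at all?
		$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause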
	
	I0421 19:55:29.601908   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:55:30.075850   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:30.092432   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:55:30.103405   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:55:30.103429   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:55:30.103473   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:55:30.114018   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:55:30.114073   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:55:30.124410   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:55:30.134021   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:55:30.134076   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:55:30.143946   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.153926   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:55:30.153973   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.164013   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:55:30.173459   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:55:30.173512   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
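	The block above is minikube's stale-config check before the retry: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when it is missing or does not mention that endpoint. A condensed sketch of the same loop (illustrative only, not minikube's actual code; endpoint and paths as shown in the log):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done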
	I0421 19:55:30.184067   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:55:30.259108   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:30.259195   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:30.422144   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:30.422317   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:30.422497   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:30.619194   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:30.621135   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:30.621258   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:30.621314   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:30.621437   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:30.621530   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:30.621617   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:30.621956   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:30.622478   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:30.623068   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:30.623509   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:30.624072   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:30.624110   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:30.624183   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:30.871049   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:30.931466   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:31.088680   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:31.275358   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:31.305344   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:31.307220   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:31.307289   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:31.484365   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.378329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:32.450259   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:31.486164   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:31.486312   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:31.492868   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:31.494787   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:31.496104   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:31.500190   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:38.530370   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:41.602365   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:47.682316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:50.754312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:56.834318   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:59.906313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:05.986294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:09.058300   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:11.503250   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:56:11.503361   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:11.503618   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:15.138313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:16.504469   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:16.504743   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:18.210376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:24.290344   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:27.366276   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:26.505496   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:26.505769   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:33.442294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:36.514319   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:42.594275   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:45.670298   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:46.505851   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:46.506140   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:51.746306   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:54.818338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:00.898357   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:03.974324   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:10.050360   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:13.122376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:19.202341   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:22.274304   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:26.505043   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:57:26.505356   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505385   58211 kubeadm.go:309] 
	I0421 19:57:26.505436   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:57:26.505495   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:57:26.505505   58211 kubeadm.go:309] 
	I0421 19:57:26.505553   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:57:26.505596   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:57:26.505720   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:57:26.505730   58211 kubeadm.go:309] 
	I0421 19:57:26.505839   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:57:26.505883   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:57:26.505912   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:57:26.505919   58211 kubeadm.go:309] 
	I0421 19:57:26.506020   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:57:26.506152   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:57:26.506181   58211 kubeadm.go:309] 
	I0421 19:57:26.506346   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:57:26.506480   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:57:26.506581   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:57:26.506702   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:57:26.506721   58211 kubeadm.go:309] 
	I0421 19:57:26.507115   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:57:26.507237   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:57:26.507330   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:57:26.507409   58211 kubeadm.go:393] duration metric: took 8m0.981544676s to StartCluster
	I0421 19:57:26.507461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:57:26.507523   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:57:26.556647   58211 cri.go:89] found id: ""
	I0421 19:57:26.556676   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.556687   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:57:26.556695   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:57:26.556748   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:57:26.595025   58211 cri.go:89] found id: ""
	I0421 19:57:26.595055   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.595064   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:57:26.595069   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:57:26.595143   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:57:26.634084   58211 cri.go:89] found id: ""
	I0421 19:57:26.634115   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.634126   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:57:26.634134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:57:26.634201   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:57:26.672409   58211 cri.go:89] found id: ""
	I0421 19:57:26.672439   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.672450   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:57:26.672458   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:57:26.672515   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:57:26.720123   58211 cri.go:89] found id: ""
	I0421 19:57:26.720151   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.720159   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:57:26.720165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:57:26.720219   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:57:26.756889   58211 cri.go:89] found id: ""
	I0421 19:57:26.756918   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.756929   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:57:26.756936   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:57:26.757044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:57:26.802160   58211 cri.go:89] found id: ""
	I0421 19:57:26.802188   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.802197   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:57:26.802204   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:57:26.802264   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:57:26.841543   58211 cri.go:89] found id: ""
	I0421 19:57:26.841567   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.841574   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:57:26.841583   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:57:26.841598   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:57:26.894547   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:57:26.894575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:57:26.909052   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:57:26.909077   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:57:27.002127   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:57:27.002150   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:57:27.002166   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:57:27.120460   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:57:27.120494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
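	The individual journalctl, dmesg, describe-nodes and crictl invocations gathered above are the same material that a single log bundle would capture; when filing this failure, the whole set can be collected in one file, as the issue box further down in this output also asks for (profile name assumed from this run):

		$ minikube logs -p old-k8s-version-867585 --file=logs.txt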
	W0421 19:57:27.170858   58211 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:57:27.170914   58211 out.go:239] * 
	W0421 19:57:27.170969   58211 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.170990   58211 out.go:239] * 
	W0421 19:57:27.171868   58211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:57:27.174893   58211 out.go:177] 
	W0421 19:57:27.176215   58211 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.176288   58211 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:57:27.176319   58211 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:57:27.177779   58211 out.go:177] 
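	The suggestion in the final warning above can be applied directly when re-running this cluster. A hedged example of the retry: only the --extra-config flag comes from the log, while the profile name, Kubernetes version and runtime flag are assumptions based on this job's configuration:

		$ minikube start -p old-k8s-version-867585 \
		    --kubernetes-version=v1.20.0 \
		    --container-runtime=crio \
		    --extra-config=kubelet.cgroup-driver=systemd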
	
	
	==> CRI-O <==
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.165307046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729448165271151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9420f322-9155-408a-9623-e6301ab4b194 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.166991349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5235430-d552-45c1-92bd-30e9b45872b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.167232415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5235430-d552-45c1-92bd-30e9b45872b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.167468139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d5235430-d552-45c1-92bd-30e9b45872b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.215314351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2da74337-48f2-4ed7-a337-44061fa608e5 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.215393047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2da74337-48f2-4ed7-a337-44061fa608e5 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.216738917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa89f7a6-8c32-4872-b273-af75f28492e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.217176668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729448217096906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa89f7a6-8c32-4872-b273-af75f28492e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.217753308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99866f4f-d19c-4007-b587-aa22d2968ce1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.217831774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99866f4f-d19c-4007-b587-aa22d2968ce1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.217867597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=99866f4f-d19c-4007-b587-aa22d2968ce1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.263799314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad4386b1-2cb8-4438-bab8-da3fbc931af2 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.263899522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad4386b1-2cb8-4438-bab8-da3fbc931af2 name=/runtime.v1.RuntimeService/Version
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.271546186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9813ab0a-6917-47c6-8b8d-b8cc4a9fa7b4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.271953501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729448271929736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9813ab0a-6917-47c6-8b8d-b8cc4a9fa7b4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.272526873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90063699-fd93-48bb-9b65-8e1ffc1f6e0a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.272675622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90063699-fd93-48bb-9b65-8e1ffc1f6e0a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.272740069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=90063699-fd93-48bb-9b65-8e1ffc1f6e0a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.325564132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33f6c2ab-9cf2-42d9-9ac9-452ac90fa4cf name=/runtime.v1.RuntimeService/Version
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.325684231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33f6c2ab-9cf2-42d9-9ac9-452ac90fa4cf name=/runtime.v1.RuntimeService/Version
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.328722600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcb655e3-f9ba-4528-ab4e-65132ca213da name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.329214650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729448329114493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcb655e3-f9ba-4528-ab4e-65132ca213da name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.330573967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cceae94-a6b0-45b1-989d-a1860ce66a48 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.330658619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cceae94-a6b0-45b1-989d-a1860ce66a48 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 19:57:28 old-k8s-version-867585 crio[653]: time="2024-04-21 19:57:28.330695060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2cceae94-a6b0-45b1-989d-a1860ce66a48 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
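	The refused connection is the expected symptom here: with the kubelet down, the kube-apiserver static pod was never started, so nothing listens on 8443. A quick way to confirm both ends from inside the node (a sketch; the 10248 health endpoint comes from the kubelet-check messages earlier in the log, the rest is assumed):

		$ minikube ssh -p old-k8s-version-867585
		# kubelet health endpoint - refused while the kubelet is down
		$ curl -sSL http://localhost:10248/healthz
		# apiserver endpoint - refused because its static pod never started
		$ curl -k https://localhost:8443/healthz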
	
	
	==> dmesg <==
	[Apr21 19:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052533] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043842] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr21 19:49] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.559572] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.706661] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653397] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.066823] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075953] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.180284] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.150867] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.317680] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +7.956391] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.073092] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.574533] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[ +11.346099] kauditd_printk_skb: 46 callbacks suppressed
	[Apr21 19:53] systemd-fstab-generator[4927]: Ignoring "noauto" option for root device
	[Apr21 19:55] systemd-fstab-generator[5208]: Ignoring "noauto" option for root device
	[  +0.069004] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:57:28 up 8 min,  0 users,  load average: 0.00, 0.12, 0.08
	Linux old-k8s-version-867585 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: goroutine 152 [runnable]:
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008d1180)
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: goroutine 153 [select]:
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b72000, 0xc000a2ad01, 0xc000b10980, 0xc00073df70, 0xc00065bec0, 0xc00065be80)
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000a2ade0, 0x0, 0x0)
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008d1180)
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 21 19:57:27 old-k8s-version-867585 kubelet[5387]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 21 19:57:27 old-k8s-version-867585 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 21 19:57:27 old-k8s-version-867585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 21 19:57:28 old-k8s-version-867585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 21 19:57:28 old-k8s-version-867585 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 21 19:57:28 old-k8s-version-867585 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 21 19:57:28 old-k8s-version-867585 kubelet[5493]: I0421 19:57:28.335051    5493 server.go:416] Version: v1.20.0
	Apr 21 19:57:28 old-k8s-version-867585 kubelet[5493]: I0421 19:57:28.335412    5493 server.go:837] Client rotation is on, will bootstrap in background
	Apr 21 19:57:28 old-k8s-version-867585 kubelet[5493]: I0421 19:57:28.339073    5493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 21 19:57:28 old-k8s-version-867585 kubelet[5493]: W0421 19:57:28.342432    5493 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 21 19:57:28 old-k8s-version-867585 kubelet[5493]: I0421 19:57:28.343963    5493 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (238.336842ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-867585" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (734.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-727235 --alsologtostderr -v=3
E0421 19:54:06.205170   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-727235 --alsologtostderr -v=3: exit status 82 (2m0.52166692s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-727235"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:52:21.404358   61484 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:52:21.404478   61484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:52:21.404490   61484 out.go:304] Setting ErrFile to fd 2...
	I0421 19:52:21.404496   61484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:52:21.404736   61484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:52:21.405027   61484 out.go:298] Setting JSON to false
	I0421 19:52:21.405124   61484 mustload.go:65] Loading cluster: embed-certs-727235
	I0421 19:52:21.405482   61484 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:52:21.405563   61484 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:52:21.405746   61484 mustload.go:65] Loading cluster: embed-certs-727235
	I0421 19:52:21.405870   61484 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:52:21.405912   61484 stop.go:39] StopHost: embed-certs-727235
	I0421 19:52:21.406453   61484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:52:21.406508   61484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:52:21.421748   61484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0421 19:52:21.422262   61484 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:52:21.422818   61484 main.go:141] libmachine: Using API Version  1
	I0421 19:52:21.422840   61484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:52:21.423211   61484 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:52:21.425673   61484 out.go:177] * Stopping node "embed-certs-727235"  ...
	I0421 19:52:21.426977   61484 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0421 19:52:21.427014   61484 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:52:21.427240   61484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0421 19:52:21.427275   61484 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:52:21.430403   61484 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:52:21.430797   61484 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:51:24 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:52:21.430819   61484 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:52:21.430974   61484 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:52:21.431143   61484 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:52:21.431304   61484 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:52:21.431483   61484 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:52:21.540073   61484 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0421 19:52:21.602493   61484 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0421 19:52:21.650410   61484 main.go:141] libmachine: Stopping "embed-certs-727235"...
	I0421 19:52:21.650440   61484 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:52:21.652266   61484 main.go:141] libmachine: (embed-certs-727235) Calling .Stop
	I0421 19:52:21.656258   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 0/120
	I0421 19:52:22.658547   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 1/120
	I0421 19:52:23.660717   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 2/120
	I0421 19:52:24.662148   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 3/120
	I0421 19:52:25.663410   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 4/120
	I0421 19:52:26.664939   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 5/120
	I0421 19:52:27.666510   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 6/120
	I0421 19:52:28.668620   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 7/120
	I0421 19:52:29.670218   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 8/120
	I0421 19:52:30.672422   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 9/120
	I0421 19:52:31.673971   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 10/120
	I0421 19:52:32.675309   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 11/120
	I0421 19:52:33.676801   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 12/120
	I0421 19:52:34.678192   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 13/120
	I0421 19:52:35.680692   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 14/120
	I0421 19:52:36.682586   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 15/120
	I0421 19:52:37.684547   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 16/120
	I0421 19:52:38.686081   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 17/120
	I0421 19:52:39.687761   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 18/120
	I0421 19:52:40.689223   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 19/120
	I0421 19:52:41.690560   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 20/120
	I0421 19:52:42.691961   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 21/120
	I0421 19:52:43.693210   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 22/120
	I0421 19:52:44.695142   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 23/120
	I0421 19:52:45.697189   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 24/120
	I0421 19:52:46.699343   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 25/120
	I0421 19:52:47.700948   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 26/120
	I0421 19:52:48.702944   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 27/120
	I0421 19:52:49.704242   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 28/120
	I0421 19:52:50.706308   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 29/120
	I0421 19:52:51.708695   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 30/120
	I0421 19:52:52.709994   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 31/120
	I0421 19:52:53.711680   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 32/120
	I0421 19:52:54.713114   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 33/120
	I0421 19:52:55.714665   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 34/120
	I0421 19:52:56.716693   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 35/120
	I0421 19:52:57.718425   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 36/120
	I0421 19:52:58.720559   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 37/120
	I0421 19:52:59.722640   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 38/120
	I0421 19:53:00.724577   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 39/120
	I0421 19:53:01.726754   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 40/120
	I0421 19:53:02.728077   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 41/120
	I0421 19:53:03.729607   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 42/120
	I0421 19:53:04.731054   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 43/120
	I0421 19:53:05.732615   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 44/120
	I0421 19:53:06.734517   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 45/120
	I0421 19:53:07.735944   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 46/120
	I0421 19:53:08.737292   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 47/120
	I0421 19:53:09.738599   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 48/120
	I0421 19:53:10.740466   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 49/120
	I0421 19:53:11.742595   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 50/120
	I0421 19:53:12.744701   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 51/120
	I0421 19:53:13.746025   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 52/120
	I0421 19:53:14.748234   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 53/120
	I0421 19:53:15.749887   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 54/120
	I0421 19:53:16.752124   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 55/120
	I0421 19:53:17.754195   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 56/120
	I0421 19:53:18.756493   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 57/120
	I0421 19:53:19.757725   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 58/120
	I0421 19:53:20.759124   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 59/120
	I0421 19:53:21.761303   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 60/120
	I0421 19:53:22.762878   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 61/120
	I0421 19:53:23.764453   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 62/120
	I0421 19:53:24.765871   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 63/120
	I0421 19:53:25.767297   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 64/120
	I0421 19:53:26.769240   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 65/120
	I0421 19:53:27.770679   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 66/120
	I0421 19:53:28.772683   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 67/120
	I0421 19:53:29.773956   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 68/120
	I0421 19:53:30.775232   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 69/120
	I0421 19:53:31.777181   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 70/120
	I0421 19:53:32.778653   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 71/120
	I0421 19:53:33.780506   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 72/120
	I0421 19:53:34.781798   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 73/120
	I0421 19:53:35.783024   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 74/120
	I0421 19:53:36.784717   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 75/120
	I0421 19:53:37.786161   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 76/120
	I0421 19:53:38.787669   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 77/120
	I0421 19:53:39.789752   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 78/120
	I0421 19:53:40.791802   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 79/120
	I0421 19:53:41.794033   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 80/120
	I0421 19:53:42.795701   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 81/120
	I0421 19:53:43.797098   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 82/120
	I0421 19:53:44.798693   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 83/120
	I0421 19:53:45.800101   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 84/120
	I0421 19:53:46.802040   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 85/120
	I0421 19:53:47.803361   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 86/120
	I0421 19:53:48.804645   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 87/120
	I0421 19:53:49.806898   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 88/120
	I0421 19:53:50.808579   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 89/120
	I0421 19:53:51.810819   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 90/120
	I0421 19:53:52.812561   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 91/120
	I0421 19:53:53.813887   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 92/120
	I0421 19:53:54.815383   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 93/120
	I0421 19:53:55.816826   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 94/120
	I0421 19:53:56.818866   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 95/120
	I0421 19:53:57.820492   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 96/120
	I0421 19:53:58.821920   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 97/120
	I0421 19:53:59.823300   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 98/120
	I0421 19:54:00.824882   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 99/120
	I0421 19:54:01.827356   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 100/120
	I0421 19:54:02.828783   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 101/120
	I0421 19:54:03.830292   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 102/120
	I0421 19:54:04.831988   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 103/120
	I0421 19:54:05.833447   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 104/120
	I0421 19:54:06.835399   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 105/120
	I0421 19:54:07.836794   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 106/120
	I0421 19:54:08.839328   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 107/120
	I0421 19:54:09.841584   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 108/120
	I0421 19:54:10.843124   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 109/120
	I0421 19:54:11.845587   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 110/120
	I0421 19:54:12.847061   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 111/120
	I0421 19:54:13.849079   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 112/120
	I0421 19:54:14.851308   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 113/120
	I0421 19:54:15.852791   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 114/120
	I0421 19:54:16.854193   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 115/120
	I0421 19:54:17.855612   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 116/120
	I0421 19:54:18.856917   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 117/120
	I0421 19:54:19.858469   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 118/120
	I0421 19:54:20.859974   61484 main.go:141] libmachine: (embed-certs-727235) Waiting for machine to stop 119/120
	I0421 19:54:21.861178   61484 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0421 19:54:21.861249   61484 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0421 19:54:21.863322   61484 out.go:177] 
	W0421 19:54:21.864827   61484 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0421 19:54:21.864848   61484 out.go:239] * 
	* 
	W0421 19:54:21.867514   61484 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:54:21.868850   61484 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-727235 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235: exit status 3 (18.487321595s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0421 19:54:40.358326   61908 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host
	E0421 19:54:40.358352   61908 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-727235" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-597568 -n no-preload-597568
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-21 20:03:36.501659956 +0000 UTC m=+6125.025795405
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-597568 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-597568 logs -n 25: (1.390743354s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-867585        | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-167454       | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC | 21 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:54:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:54:52.830912   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.830926   62197 out.go:304] Setting ErrFile to fd 2...
	I0421 19:54:52.830932   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.831126   62197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:54:52.831742   62197 out.go:298] Setting JSON to false
	I0421 19:54:52.832674   62197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5791,"bootTime":1713723502,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:54:52.832739   62197 start.go:139] virtualization: kvm guest
	I0421 19:54:52.835455   62197 out.go:177] * [embed-certs-727235] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:54:52.837412   62197 notify.go:220] Checking for updates...
	I0421 19:54:52.837418   62197 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:54:52.839465   62197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:54:52.841250   62197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:54:52.842894   62197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:54:52.844479   62197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:54:52.845967   62197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:54:52.847931   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:54:52.848387   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.848464   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.864769   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0421 19:54:52.865105   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.865623   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.865642   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.865919   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.866109   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.866305   62197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:54:52.866589   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.866622   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.880497   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0421 19:54:52.880874   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.881355   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.881380   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.881691   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.881883   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.916395   62197 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:54:52.917730   62197 start.go:297] selected driver: kvm2
	I0421 19:54:52.917753   62197 start.go:901] validating driver "kvm2" against &{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.917858   62197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:54:52.918512   62197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.918585   62197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:54:52.933446   62197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:54:52.933791   62197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:54:52.933845   62197 cni.go:84] Creating CNI manager for ""
	I0421 19:54:52.933858   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:54:52.933901   62197 start.go:340] cluster config:
	{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.933981   62197 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.936907   62197 out.go:177] * Starting "embed-certs-727235" primary control-plane node in "embed-certs-727235" cluster
	I0421 19:54:52.938596   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:54:52.938626   62197 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:54:52.938633   62197 cache.go:56] Caching tarball of preloaded images
	I0421 19:54:52.938690   62197 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:54:52.938701   62197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:54:52.938791   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:54:52.938960   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:54:52.938995   62197 start.go:364] duration metric: took 19.691µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:54:52.939006   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:54:52.939011   62197 fix.go:54] fixHost starting: 
	I0421 19:54:52.939248   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.939274   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.953191   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0421 19:54:52.953602   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.953994   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.954024   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.954454   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.954602   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.954750   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:54:52.956153   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Running err=<nil>
	W0421 19:54:52.956167   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:54:52.958195   62197 out.go:177] * Updating the running kvm2 "embed-certs-727235" VM ...
	I0421 19:54:52.959459   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:54:52.959476   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.959678   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:54:52.961705   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:51:24 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:54:52.962165   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962245   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:54:52.962392   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962555   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962682   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:54:52.962853   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:54:52.963028   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:54:52.963038   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:54:55.842410   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:58.070842   57617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.405000958s)
	I0421 19:54:58.070936   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:54:58.089413   57617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:54:58.101786   57617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:54:58.114021   57617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:54:58.114065   57617 kubeadm.go:156] found existing configuration files:
	
	I0421 19:54:58.114126   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0421 19:54:58.124228   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:54:58.124296   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:54:58.135037   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0421 19:54:58.144890   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:54:58.144958   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:54:58.155403   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.165155   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:54:58.165207   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.175703   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0421 19:54:58.185428   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:54:58.185521   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
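The stale-config pass above amounts to: for each kubeconfig under /etc/kubernetes, keep it only if it already references https://control-plane.minikube.internal:8444, otherwise remove it so kubeadm regenerates it. A shell sketch of the same check, using the paths and URL from the log lines above:

    # Drop any kubeconfig that does not reference the expected API server endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Here all four files are missing, so every grep exits non-zero and the subsequent kubeadm init writes them fresh.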
	I0421 19:54:58.195328   57617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:54:58.257787   57617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:54:58.257868   57617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:54:58.432626   57617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:54:58.432766   57617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:54:58.432943   57617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:54:58.677807   57617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:54:58.679655   57617 out.go:204]   - Generating certificates and keys ...
	I0421 19:54:58.679763   57617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:54:58.679856   57617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:54:58.679974   57617 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:54:58.680053   57617 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:54:58.680125   57617 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:54:58.680177   57617 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:54:58.681691   57617 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:54:58.682034   57617 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:54:58.682257   57617 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:54:58.682547   57617 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:54:58.682770   57617 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:54:58.682840   57617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:54:58.938223   57617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:54:58.989244   57617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:54:59.196060   57617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:54:59.378330   57617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:54:59.435654   57617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:54:59.436159   57617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:54:59.440839   57617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:54:58.914303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:59.442694   57617 out.go:204]   - Booting up control plane ...
	I0421 19:54:59.442826   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:54:59.442942   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:54:59.443122   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:54:59.466298   57617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:54:59.469370   57617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:54:59.469656   57617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:54:59.622281   57617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:54:59.622433   57617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:55:00.123513   57617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.401309ms
	I0421 19:55:00.123606   57617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:55:05.627324   57617 kubeadm.go:309] [api-check] The API server is healthy after 5.503528473s
	I0421 19:55:05.644392   57617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:55:05.666212   57617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:55:05.696150   57617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:55:05.696487   57617 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-167454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:55:05.709873   57617 kubeadm.go:309] [bootstrap-token] Using token: ypxtpg.5u6l3v2as04iv2aj
	I0421 19:55:05.711407   57617 out.go:204]   - Configuring RBAC rules ...
	I0421 19:55:05.711556   57617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:55:05.721552   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:55:05.735168   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:55:05.739580   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:55:05.743466   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:55:05.747854   57617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:55:06.034775   57617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:55:06.468585   57617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:55:07.036924   57617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:55:07.036983   57617 kubeadm.go:309] 
	I0421 19:55:07.037040   57617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:55:07.037060   57617 kubeadm.go:309] 
	I0421 19:55:07.037199   57617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:55:07.037218   57617 kubeadm.go:309] 
	I0421 19:55:07.037259   57617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:55:07.037348   57617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:55:07.037419   57617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:55:07.037433   57617 kubeadm.go:309] 
	I0421 19:55:07.037526   57617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:55:07.037540   57617 kubeadm.go:309] 
	I0421 19:55:07.037604   57617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:55:07.037615   57617 kubeadm.go:309] 
	I0421 19:55:07.037681   57617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:55:07.037760   57617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:55:07.037823   57617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:55:07.037828   57617 kubeadm.go:309] 
	I0421 19:55:07.037899   57617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:55:07.037964   57617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:55:07.037970   57617 kubeadm.go:309] 
	I0421 19:55:07.038098   57617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038255   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 19:55:07.038283   57617 kubeadm.go:309] 	--control-plane 
	I0421 19:55:07.038288   57617 kubeadm.go:309] 
	I0421 19:55:07.038400   57617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:55:07.038411   57617 kubeadm.go:309] 
	I0421 19:55:07.038517   57617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038672   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 19:55:07.038956   57617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:55:07.038982   57617 cni.go:84] Creating CNI manager for ""
	I0421 19:55:07.038998   57617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:55:07.040852   57617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:55:04.994338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:07.042257   57617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:55:07.057287   57617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
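The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. The file contents are not reproduced in this log; an illustrative bridge conflist of this kind (plugin names are standard CNI, but the subnet and option values here are assumptions, not the file from this run) could be written as:

    # Write an example bridge + portmap CNI chain (illustrative values only).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF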
	I0421 19:55:07.078228   57617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:55:07.078330   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.078390   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167454 minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=default-k8s-diff-port-167454 minikube.k8s.io/primary=true
	I0421 19:55:07.128726   57617 ops.go:34] apiserver oom_adj: -16
	I0421 19:55:07.277531   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.778563   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.066312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:08.278441   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.778051   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.277768   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.777868   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.278602   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.777607   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.278260   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.777609   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.277684   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.778116   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.146347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:17.218265   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:13.278439   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:13.777901   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.278214   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.777957   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.278369   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.778113   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.277991   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.778322   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.278350   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.778144   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.278465   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.778049   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.278228   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.777615   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.945015   57617 kubeadm.go:1107] duration metric: took 12.866746923s to wait for elevateKubeSystemPrivileges
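The repeated "kubectl get sa default" runs above are minikube polling, at roughly 500ms intervals, until the default service account exists in the new cluster; the 12.87s metric is the total time that wait took. The same wait expressed directly in shell, using the binary and kubeconfig paths from the log:

    # Poll until the default service account is created, then continue.
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done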
	W0421 19:55:19.945062   57617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:55:19.945073   57617 kubeadm.go:393] duration metric: took 5m11.113256567s to StartCluster
	I0421 19:55:19.945094   57617 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.945186   57617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:55:19.947618   57617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.947919   57617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.23 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:55:19.949819   57617 out.go:177] * Verifying Kubernetes components...
	I0421 19:55:19.947983   57617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:55:19.948132   57617 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:55:19.951664   57617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:55:19.951671   57617 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951685   57617 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951708   57617 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951718   57617 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-167454"
	I0421 19:55:19.951720   57617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167454"
	W0421 19:55:19.951730   57617 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:55:19.951741   57617 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.951753   57617 addons.go:243] addon metrics-server should already be in state true
	I0421 19:55:19.951766   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.951781   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.952059   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952095   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952147   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952169   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952170   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952378   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.969767   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0421 19:55:19.970291   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.971023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.971053   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.971517   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.971747   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.971966   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0421 19:55:19.972325   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0421 19:55:19.972539   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.972691   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.973050   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973075   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973313   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973336   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973408   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973712   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973986   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974023   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.974287   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974321   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.976061   57617 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.976086   57617 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:55:19.976116   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.976473   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.976513   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.989851   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I0421 19:55:19.990053   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0421 19:55:19.990494   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.990573   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.991023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991039   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991170   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991197   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991380   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991527   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991556   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.991713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.993398   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995704   57617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:55:19.994181   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995594   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0421 19:55:19.997429   57617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:19.997450   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:55:19.997470   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:19.998995   57617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 19:55:19.997642   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.000129   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000728   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.000743   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000638   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.000805   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 19:55:20.000816   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 19:55:20.000826   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.000991   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.001147   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.001328   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.001340   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.001362   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.001763   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.002313   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:20.002335   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:20.003803   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004388   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.004404   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004602   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.004792   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.004958   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.005128   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.018016   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0421 19:55:20.018651   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.019177   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.019196   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.019422   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.019702   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:20.021066   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:20.021324   57617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.021340   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:55:20.021357   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.024124   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024503   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.024524   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024686   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.024848   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.025030   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.025184   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.214689   57617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:55:20.264530   57617 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.281976   57617 node_ready.go:49] node "default-k8s-diff-port-167454" has status "Ready":"True"
	I0421 19:55:20.281999   57617 node_ready.go:38] duration metric: took 17.434628ms for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.282007   57617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:20.297108   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:20.386102   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.408686   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 19:55:20.408706   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 19:55:20.416022   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:20.455756   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 19:55:20.455778   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 19:55:20.603535   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.603559   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 19:55:20.690543   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.842718   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.842753   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843074   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843148   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843163   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.843172   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.843191   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843475   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843511   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843525   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.856272   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.856294   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.856618   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.856636   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.856673   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550249   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13418491s)
	I0421 19:55:21.550297   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550305   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550577   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550654   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:21.550663   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550675   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550684   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550928   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550946   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.853935   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.853970   57617 pod_ready.go:81] duration metric: took 1.556832657s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.853984   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924815   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.924845   57617 pod_ready.go:81] duration metric: took 70.852928ms for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924857   57617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955217   57617 pod_ready.go:92] pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.955246   57617 pod_ready.go:81] duration metric: took 30.380253ms for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955259   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975065   57617 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.975094   57617 pod_ready.go:81] duration metric: took 19.818959ms for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975106   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981884   57617 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.981907   57617 pod_ready.go:81] duration metric: took 6.791796ms for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981919   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.001934   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311352362s)
	I0421 19:55:22.001984   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002000   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002311   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002369   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002330   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.002410   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002434   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002649   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002689   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002705   57617 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-167454"
	I0421 19:55:22.002713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.005036   57617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0421 19:55:22.006362   57617 addons.go:505] duration metric: took 2.058380621s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
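With default-storageclass, storage-provisioner and metrics-server enabled, the addon state can also be inspected from the host through the profile's kubectl context; for example (the k8s-app=metrics-server label selector is an assumption, not shown in this log):

    kubectl --context default-k8s-diff-port-167454 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-167454 top nodes   # works once metrics-server is serving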
	I0421 19:55:22.269772   57617 pod_ready.go:92] pod "kube-proxy-wmv4v" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.269798   57617 pod_ready.go:81] duration metric: took 287.872366ms for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.269808   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668470   57617 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.668494   57617 pod_ready.go:81] duration metric: took 398.679544ms for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668502   57617 pod_ready.go:38] duration metric: took 2.386486578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:22.668516   57617 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:55:22.668560   57617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:55:22.688191   57617 api_server.go:72] duration metric: took 2.740229162s to wait for apiserver process to appear ...
	I0421 19:55:22.688224   57617 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:55:22.688244   57617 api_server.go:253] Checking apiserver healthz at https://192.168.61.23:8444/healthz ...
	I0421 19:55:22.699424   57617 api_server.go:279] https://192.168.61.23:8444/healthz returned 200:
	ok
	I0421 19:55:22.700614   57617 api_server.go:141] control plane version: v1.30.0
	I0421 19:55:22.700636   57617 api_server.go:131] duration metric: took 12.404937ms to wait for apiserver health ...
	I0421 19:55:22.700643   57617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:55:22.873594   57617 system_pods.go:59] 9 kube-system pods found
	I0421 19:55:22.873622   57617 system_pods.go:61] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:22.873631   57617 system_pods.go:61] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:22.873635   57617 system_pods.go:61] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:22.873639   57617 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:22.873643   57617 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:22.873647   57617 system_pods.go:61] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:22.873651   57617 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:22.873657   57617 system_pods.go:61] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:22.873698   57617 system_pods.go:61] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:22.873717   57617 system_pods.go:74] duration metric: took 173.068164ms to wait for pod list to return data ...
	I0421 19:55:22.873731   57617 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:55:23.068026   57617 default_sa.go:45] found service account: "default"
	I0421 19:55:23.068053   57617 default_sa.go:55] duration metric: took 194.313071ms for default service account to be created ...
	I0421 19:55:23.068064   57617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:55:23.272118   57617 system_pods.go:86] 9 kube-system pods found
	I0421 19:55:23.272148   57617 system_pods.go:89] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:23.272156   57617 system_pods.go:89] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:23.272162   57617 system_pods.go:89] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:23.272168   57617 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:23.272173   57617 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:23.272178   57617 system_pods.go:89] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:23.272184   57617 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:23.272194   57617 system_pods.go:89] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:23.272200   57617 system_pods.go:89] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:23.272212   57617 system_pods.go:126] duration metric: took 204.142116ms to wait for k8s-apps to be running ...
	I0421 19:55:23.272231   57617 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:55:23.272283   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:23.288800   57617 system_svc.go:56] duration metric: took 16.572799ms WaitForService to wait for kubelet
	I0421 19:55:23.288829   57617 kubeadm.go:576] duration metric: took 3.340874079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:55:23.288851   57617 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:55:23.469503   57617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:55:23.469541   57617 node_conditions.go:123] node cpu capacity is 2
	I0421 19:55:23.469554   57617 node_conditions.go:105] duration metric: took 180.696423ms to run NodePressure ...
	I0421 19:55:23.469567   57617 start.go:240] waiting for startup goroutines ...
	I0421 19:55:23.469576   57617 start.go:245] waiting for cluster config update ...
	I0421 19:55:23.469590   57617 start.go:254] writing updated cluster config ...
	I0421 19:55:23.469941   57617 ssh_runner.go:195] Run: rm -f paused
	I0421 19:55:23.521989   57617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:55:23.524030   57617 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-167454" cluster and "default" namespace by default
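With KUBECONFIG pointed at the file updated above (/home/jenkins/minikube-integration/18702-3854/kubeconfig), the new context can be exercised directly; for example:

    export KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
    kubectl config current-context   # expected: default-k8s-diff-port-167454
    kubectl get nodes -o wide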
	I0421 19:55:23.298271   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
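The string of "no route to host" dials from process 62197 means the embed-certs-727235 VM at 192.168.72.9 is unreachable on its SSH port throughout this window. The same condition can be confirmed by hand from the host (assuming nc is available):

    nc -zv -w 2 192.168.72.9 22   # expect a connection failure while the VM is in this state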
	I0421 19:55:29.590689   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:55:29.590767   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:55:29.592377   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:29.592430   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:29.592527   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:29.592662   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:29.592794   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:29.592892   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:29.595022   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:29.595115   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:29.595190   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:29.595263   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:29.595311   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:29.595375   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:29.595433   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:29.595520   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:29.595598   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:29.595680   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:29.595775   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:29.595824   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:29.595875   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:29.595919   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:29.595982   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:29.596047   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:29.596091   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:29.596174   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:29.596256   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:29.596301   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:29.596367   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.598820   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:29.598926   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:29.598993   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:29.599054   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:29.599162   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:29.599331   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:29.599418   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:55:29.599516   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599705   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.599772   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599936   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600041   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600191   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600244   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600389   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600481   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600654   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600669   58211 kubeadm.go:309] 
	I0421 19:55:29.600702   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:55:29.600737   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:55:29.600743   58211 kubeadm.go:309] 
	I0421 19:55:29.600777   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:55:29.600810   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:55:29.600901   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:55:29.600908   58211 kubeadm.go:309] 
	I0421 19:55:29.601009   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:55:29.601057   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:55:29.601109   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:55:29.601118   58211 kubeadm.go:309] 
	I0421 19:55:29.601224   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:55:29.601323   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:55:29.601333   58211 kubeadm.go:309] 
	I0421 19:55:29.601485   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:55:29.601579   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:55:29.601646   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:55:29.601751   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:55:29.601835   58211 kubeadm.go:309] 
	W0421 19:55:29.601862   58211 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
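The troubleshooting hints kubeadm prints above can be followed by hand inside the VM. A minimal sketch, assuming the usual 'minikube ssh -- <cmd>' form and the cri-o socket path shown in the log; the profile name is a placeholder for the profile under test:

    minikube ssh -p <profile> -- 'sudo systemctl status kubelet'
    minikube ssh -p <profile> -- 'sudo journalctl -xeu kubelet | tail -n 100'
    minikube ssh -p <profile> -- 'curl -sSL http://localhost:10248/healthz'
    minikube ssh -p <profile> -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"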
	
	I0421 19:55:29.601908   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:55:30.075850   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:30.092432   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:55:30.103405   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:55:30.103429   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:55:30.103473   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:55:30.114018   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:55:30.114073   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:55:30.124410   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:55:30.134021   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:55:30.134076   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:55:30.143946   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.153926   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:55:30.153973   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.164013   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:55:30.173459   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:55:30.173512   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
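The stale-config cleanup above (kubeadm reset followed by the per-file grep/rm checks) amounts to roughly this shell sequence on the node, reconstructed from the commands shown in the log:

    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    for f in admin kubelet controller-manager scheduler; do
      # keep the file only if it already points at the expected control-plane endpoint
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done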
	I0421 19:55:30.184067   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:55:30.259108   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:30.259195   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:30.422144   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:30.422317   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:30.422497   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:30.619194   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:30.621135   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:30.621258   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:30.621314   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:30.621437   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:30.621530   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:30.621617   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:30.621956   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:30.622478   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:30.623068   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:30.623509   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:30.624072   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:30.624110   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:30.624183   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:30.871049   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:30.931466   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:31.088680   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:31.275358   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:31.305344   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:31.307220   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:31.307289   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:31.484365   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.378329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:32.450259   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:31.486164   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:31.486312   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:31.492868   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:31.494787   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:31.496104   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:31.500190   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:38.530370   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:41.602365   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:47.682316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:50.754312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:56.834318   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:59.906313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:05.986294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:09.058300   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:11.503250   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:56:11.503361   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:11.503618   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:15.138313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:16.504469   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:16.504743   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:18.210376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:24.290344   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:27.366276   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:26.505496   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:26.505769   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:33.442294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:36.514319   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:42.594275   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:45.670298   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:46.505851   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:46.506140   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:51.746306   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:54.818338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:00.898357   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:03.974324   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:10.050360   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:13.122376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:19.202341   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:22.274304   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:26.505043   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:57:26.505356   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505385   58211 kubeadm.go:309] 
	I0421 19:57:26.505436   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:57:26.505495   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:57:26.505505   58211 kubeadm.go:309] 
	I0421 19:57:26.505553   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:57:26.505596   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:57:26.505720   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:57:26.505730   58211 kubeadm.go:309] 
	I0421 19:57:26.505839   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:57:26.505883   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:57:26.505912   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:57:26.505919   58211 kubeadm.go:309] 
	I0421 19:57:26.506020   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:57:26.506152   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:57:26.506181   58211 kubeadm.go:309] 
	I0421 19:57:26.506346   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:57:26.506480   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:57:26.506581   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:57:26.506702   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:57:26.506721   58211 kubeadm.go:309] 
	I0421 19:57:26.507115   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:57:26.507237   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:57:26.507330   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:57:26.507409   58211 kubeadm.go:393] duration metric: took 8m0.981544676s to StartCluster
	I0421 19:57:26.507461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:57:26.507523   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:57:26.556647   58211 cri.go:89] found id: ""
	I0421 19:57:26.556676   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.556687   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:57:26.556695   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:57:26.556748   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:57:26.595025   58211 cri.go:89] found id: ""
	I0421 19:57:26.595055   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.595064   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:57:26.595069   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:57:26.595143   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:57:26.634084   58211 cri.go:89] found id: ""
	I0421 19:57:26.634115   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.634126   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:57:26.634134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:57:26.634201   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:57:26.672409   58211 cri.go:89] found id: ""
	I0421 19:57:26.672439   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.672450   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:57:26.672458   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:57:26.672515   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:57:26.720123   58211 cri.go:89] found id: ""
	I0421 19:57:26.720151   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.720159   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:57:26.720165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:57:26.720219   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:57:26.756889   58211 cri.go:89] found id: ""
	I0421 19:57:26.756918   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.756929   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:57:26.756936   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:57:26.757044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:57:26.802160   58211 cri.go:89] found id: ""
	I0421 19:57:26.802188   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.802197   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:57:26.802204   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:57:26.802264   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:57:26.841543   58211 cri.go:89] found id: ""
	I0421 19:57:26.841567   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.841574   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
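Each of the container probes above returns an empty id list, which indicates cri-o never created any control-plane containers; the probes are plain crictl queries and can be repeated by hand (a sketch, container names as in the log):

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kube-controller-manager
    sudo crictl ps -a --quiet --name=kube-scheduler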
	I0421 19:57:26.841583   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:57:26.841598   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:57:26.894547   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:57:26.894575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:57:26.909052   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:57:26.909077   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:57:27.002127   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:57:27.002150   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:57:27.002166   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:57:27.120460   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:57:27.120494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
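The same diagnostics bundle can be gathered manually with the commands minikube runs here; a sketch using the paths taken from the log:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a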
	W0421 19:57:27.170858   58211 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:57:27.170914   58211 out.go:239] * 
	W0421 19:57:27.170969   58211 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.170990   58211 out.go:239] * 
	W0421 19:57:27.171868   58211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:57:27.174893   58211 out.go:177] 
	W0421 19:57:27.176215   58211 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.176288   58211 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:57:27.176319   58211 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:57:27.177779   58211 out.go:177] 
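The exit message above points at the kubelet cgroup driver. A retry following the log's own suggestion would look roughly like the sketch below; only the --extra-config flag comes from the hint, while the profile name is a placeholder and the remaining flags are inferred from this job's context (kvm2 driver, cri-o runtime, Kubernetes v1.20.0):

    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd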
	I0421 19:57:28.354287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:31.426307   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:37.506302   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:40.578329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:46.658286   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:49.730290   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:55.810303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:58.882287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:04.962316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:08.038328   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:14.114282   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:17.186379   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:23.270347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:26.338313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:32.418266   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:35.494377   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:41.570277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:44.642263   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:50.722316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:53.794367   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:59.874261   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:02.946333   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:09.026296   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:12.098331   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:18.178280   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:21.250268   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:27.330277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:30.331351   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:59:30.331383   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331744   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:30.331770   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331983   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:30.333880   62197 machine.go:97] duration metric: took 4m37.374404361s to provisionDockerMachine
	I0421 19:59:30.333921   62197 fix.go:56] duration metric: took 4m37.394910099s for fixHost
	I0421 19:59:30.333928   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 4m37.394926037s
	W0421 19:59:30.333945   62197 start.go:713] error starting host: provision: host is not running
	W0421 19:59:30.334039   62197 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0421 19:59:30.334070   62197 start.go:728] Will try again in 5 seconds ...
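While the driver waits to retry, the host state it reports can be checked directly; a sketch using the profile name from the log (the virsh calls assume the qemu:///system connection normally used by the kvm2 driver):

    minikube status -p embed-certs-727235
    virsh -c qemu:///system domstate embed-certs-727235
    virsh -c qemu:///system net-info mk-embed-certs-727235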
	I0421 19:59:35.335761   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:59:35.335860   62197 start.go:364] duration metric: took 61.013µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:59:35.335882   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:59:35.335890   62197 fix.go:54] fixHost starting: 
	I0421 19:59:35.336171   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:59:35.336191   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:59:35.351703   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0421 19:59:35.352186   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:59:35.352723   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:59:35.352752   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:59:35.353060   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:59:35.353252   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:35.353458   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:59:35.355260   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Stopped err=<nil>
	I0421 19:59:35.355290   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	W0421 19:59:35.355474   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:59:35.357145   62197 out.go:177] * Restarting existing kvm2 VM for "embed-certs-727235" ...
	I0421 19:59:35.358345   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Start
	I0421 19:59:35.358510   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring networks are active...
	I0421 19:59:35.359250   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network default is active
	I0421 19:59:35.359533   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network mk-embed-certs-727235 is active
	I0421 19:59:35.359951   62197 main.go:141] libmachine: (embed-certs-727235) Getting domain xml...
	I0421 19:59:35.360663   62197 main.go:141] libmachine: (embed-certs-727235) Creating domain...
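The "Waiting to get IP" retries that follow poll for a DHCP lease matching the MAC address shown in the debug lines. If needed, the same information can be read manually from libvirt (a sketch, network and domain names as in the log):

    virsh -c qemu:///system net-dhcp-leases mk-embed-certs-727235
    virsh -c qemu:///system domifaddr embed-certs-727235 --source lease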
	I0421 19:59:36.615174   62197 main.go:141] libmachine: (embed-certs-727235) Waiting to get IP...
	I0421 19:59:36.615997   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.616369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.616421   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.616351   63337 retry.go:31] will retry after 283.711872ms: waiting for machine to come up
	I0421 19:59:36.902032   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.902618   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.902655   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.902566   63337 retry.go:31] will retry after 336.383022ms: waiting for machine to come up
	I0421 19:59:37.240117   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.240613   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.240637   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.240565   63337 retry.go:31] will retry after 468.409378ms: waiting for machine to come up
	I0421 19:59:37.711065   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.711526   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.711556   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.711481   63337 retry.go:31] will retry after 457.618649ms: waiting for machine to come up
	I0421 19:59:38.170991   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.171513   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.171542   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.171450   63337 retry.go:31] will retry after 756.497464ms: waiting for machine to come up
	I0421 19:59:38.929950   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.930460   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.930495   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.930406   63337 retry.go:31] will retry after 667.654845ms: waiting for machine to come up
	I0421 19:59:39.599112   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:39.599566   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:39.599595   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:39.599514   63337 retry.go:31] will retry after 862.586366ms: waiting for machine to come up
	I0421 19:59:40.463709   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:40.464277   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:40.464311   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:40.464216   63337 retry.go:31] will retry after 1.446407672s: waiting for machine to come up
	I0421 19:59:41.912470   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:41.912935   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:41.912967   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:41.912893   63337 retry.go:31] will retry after 1.78143514s: waiting for machine to come up
	I0421 19:59:43.695369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:43.695781   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:43.695818   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:43.695761   63337 retry.go:31] will retry after 1.850669352s: waiting for machine to come up
	I0421 19:59:45.547626   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:45.548119   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:45.548147   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:45.548063   63337 retry.go:31] will retry after 2.399567648s: waiting for machine to come up
	I0421 19:59:47.949884   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:47.950410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:47.950435   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:47.950371   63337 retry.go:31] will retry after 2.461886164s: waiting for machine to come up
	I0421 19:59:50.413594   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:50.414039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:50.414075   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:50.413995   63337 retry.go:31] will retry after 3.706995804s: waiting for machine to come up
	I0421 19:59:54.123715   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124155   62197 main.go:141] libmachine: (embed-certs-727235) Found IP for machine: 192.168.72.9
	I0421 19:59:54.124185   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has current primary IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124194   62197 main.go:141] libmachine: (embed-certs-727235) Reserving static IP address...
	I0421 19:59:54.124657   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.124687   62197 main.go:141] libmachine: (embed-certs-727235) Reserved static IP address: 192.168.72.9
	I0421 19:59:54.124708   62197 main.go:141] libmachine: (embed-certs-727235) DBG | skip adding static IP to network mk-embed-certs-727235 - found existing host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"}
	I0421 19:59:54.124723   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Getting to WaitForSSH function...
	I0421 19:59:54.124737   62197 main.go:141] libmachine: (embed-certs-727235) Waiting for SSH to be available...
	I0421 19:59:54.126889   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127295   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.127327   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH client type: external
	I0421 19:59:54.127437   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa (-rw-------)
	I0421 19:59:54.127483   62197 main.go:141] libmachine: (embed-certs-727235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:59:54.127502   62197 main.go:141] libmachine: (embed-certs-727235) DBG | About to run SSH command:
	I0421 19:59:54.127521   62197 main.go:141] libmachine: (embed-certs-727235) DBG | exit 0
	I0421 19:59:54.254733   62197 main.go:141] libmachine: (embed-certs-727235) DBG | SSH cmd err, output: <nil>: 
	I0421 19:59:54.255110   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetConfigRaw
	I0421 19:59:54.255772   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.258448   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.258834   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.258858   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.259128   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:59:54.259326   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:59:54.259348   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:54.259572   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.262235   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262666   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.262695   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262773   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.262946   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263307   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.263484   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.263693   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.263712   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:59:54.379098   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:59:54.379135   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379445   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:54.379477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379680   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.382614   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383064   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.383095   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383211   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.383422   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383585   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383748   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.383896   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.384121   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.384147   62197 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-727235 && echo "embed-certs-727235" | sudo tee /etc/hostname
	I0421 19:59:54.511915   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-727235
	
	I0421 19:59:54.511944   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.515093   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515475   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.515508   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515663   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.515865   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516024   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.516275   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.516436   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.516452   62197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-727235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-727235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-727235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:59:54.638386   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:59:54.638426   62197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:59:54.638450   62197 buildroot.go:174] setting up certificates
	I0421 19:59:54.638460   62197 provision.go:84] configureAuth start
	I0421 19:59:54.638468   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.638764   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.641718   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.642084   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642297   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.644790   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645154   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.645182   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645300   62197 provision.go:143] copyHostCerts
	I0421 19:59:54.645353   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:59:54.645363   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:59:54.645423   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:59:54.645506   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:59:54.645514   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:59:54.645535   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:59:54.645587   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:59:54.645594   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:59:54.645613   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:59:54.645658   62197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-727235 san=[127.0.0.1 192.168.72.9 embed-certs-727235 localhost minikube]
	I0421 19:59:54.847892   62197 provision.go:177] copyRemoteCerts
	I0421 19:59:54.847950   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:59:54.847974   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.850561   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.850885   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.850916   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.851070   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.851261   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.851408   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.851542   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:54.939705   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 19:59:54.969564   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:59:54.996643   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:59:55.023261   62197 provision.go:87] duration metric: took 384.790427ms to configureAuth
	I0421 19:59:55.023285   62197 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:59:55.023469   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:59:55.023553   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.026429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026817   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.026851   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026984   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.027176   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027309   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.027605   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.027807   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.027831   62197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:59:55.329921   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:59:55.329950   62197 machine.go:97] duration metric: took 1.070609599s to provisionDockerMachine
	I0421 19:59:55.329967   62197 start.go:293] postStartSetup for "embed-certs-727235" (driver="kvm2")
	I0421 19:59:55.329986   62197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:59:55.330007   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.330422   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:59:55.330455   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.333062   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.333463   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333642   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.333820   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.333973   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.334132   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.422655   62197 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:59:55.428020   62197 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:59:55.428049   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:59:55.428128   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:59:55.428222   62197 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:59:55.428344   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:59:55.439964   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:59:55.469927   62197 start.go:296] duration metric: took 139.939886ms for postStartSetup
	I0421 19:59:55.469977   62197 fix.go:56] duration metric: took 20.134086048s for fixHost
	I0421 19:59:55.469997   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.472590   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.472954   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.472986   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.473194   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.473438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473616   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473813   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.473993   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.474209   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.474220   62197 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:59:55.583326   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713729595.559945159
	
	I0421 19:59:55.583347   62197 fix.go:216] guest clock: 1713729595.559945159
	I0421 19:59:55.583358   62197 fix.go:229] Guest: 2024-04-21 19:59:55.559945159 +0000 UTC Remote: 2024-04-21 19:59:55.469982444 +0000 UTC m=+302.687162567 (delta=89.962715ms)
	I0421 19:59:55.583413   62197 fix.go:200] guest clock delta is within tolerance: 89.962715ms
	I0421 19:59:55.583420   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 20.24754889s
	I0421 19:59:55.583466   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.583763   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:55.586342   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586700   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.586726   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586824   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587277   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587559   62197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:59:55.587601   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.587683   62197 ssh_runner.go:195] Run: cat /version.json
	I0421 19:59:55.587721   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.590094   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590379   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590476   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590505   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590641   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590721   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590747   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590817   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.590888   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590972   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591052   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.591128   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.591172   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591276   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.676275   62197 ssh_runner.go:195] Run: systemctl --version
	I0421 19:59:55.700845   62197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:59:55.849591   62197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:59:55.856384   62197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:59:55.856444   62197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:59:55.875575   62197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:59:55.875602   62197 start.go:494] detecting cgroup driver to use...
	I0421 19:59:55.875686   62197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:59:55.892497   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:59:55.907596   62197 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:59:55.907660   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:59:55.922805   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:59:55.938117   62197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:59:56.064198   62197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:59:56.239132   62197 docker.go:233] disabling docker service ...
	I0421 19:59:56.239210   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:59:56.256188   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:59:56.271951   62197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:59:56.409651   62197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:59:56.545020   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:59:56.560474   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:59:56.581091   62197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 19:59:56.581170   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.591783   62197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:59:56.591853   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.602656   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.613491   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.624452   62197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:59:56.635277   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.646299   62197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.665973   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.677014   62197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:59:56.687289   62197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:59:56.687340   62197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:59:56.702507   62197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:59:56.723008   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:59:56.879595   62197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:59:57.034078   62197 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:59:57.034150   62197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:59:57.039565   62197 start.go:562] Will wait 60s for crictl version
	I0421 19:59:57.039621   62197 ssh_runner.go:195] Run: which crictl
	I0421 19:59:57.044006   62197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:59:57.089252   62197 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:59:57.089340   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.121283   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.160334   62197 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 19:59:57.161976   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:57.164827   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165288   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:57.165321   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165536   62197 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0421 19:59:57.170481   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:59:57.185488   62197 kubeadm.go:877] updating cluster {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:59:57.185682   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:59:57.185736   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:59:57.237246   62197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 19:59:57.237303   62197 ssh_runner.go:195] Run: which lz4
	I0421 19:59:57.241760   62197 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 19:59:57.246777   62197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:59:57.246817   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 19:59:58.900652   62197 crio.go:462] duration metric: took 1.658935699s to copy over tarball
	I0421 19:59:58.900742   62197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:00:01.517236   62197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.616462501s)
	I0421 20:00:01.517268   62197 crio.go:469] duration metric: took 2.616589126s to extract the tarball
	I0421 20:00:01.517279   62197 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:00:01.560109   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:00:01.610448   62197 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:00:01.610476   62197 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:00:01.610484   62197 kubeadm.go:928] updating node { 192.168.72.9 8443 v1.30.0 crio true true} ...
	I0421 20:00:01.610605   62197 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-727235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:00:01.610711   62197 ssh_runner.go:195] Run: crio config
	I0421 20:00:01.670151   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:01.670176   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:01.670188   62197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:00:01.670210   62197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.9 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-727235 NodeName:embed-certs-727235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:00:01.670391   62197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-727235"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:00:01.670479   62197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:00:01.683795   62197 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:00:01.683876   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:00:01.696350   62197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0421 20:00:01.717795   62197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:00:01.739491   62197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0421 20:00:01.761288   62197 ssh_runner.go:195] Run: grep 192.168.72.9	control-plane.minikube.internal$ /etc/hosts
	I0421 20:00:01.766285   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:00:01.781727   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:00:01.913030   62197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:00:01.934347   62197 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235 for IP: 192.168.72.9
	I0421 20:00:01.934375   62197 certs.go:194] generating shared ca certs ...
	I0421 20:00:01.934395   62197 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:00:01.934541   62197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:00:01.934615   62197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:00:01.934630   62197 certs.go:256] generating profile certs ...
	I0421 20:00:01.934729   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/client.key
	I0421 20:00:01.934796   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key.2840921d
	I0421 20:00:01.934854   62197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key
	I0421 20:00:01.934994   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:00:01.935032   62197 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:00:01.935045   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:00:01.935078   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:00:01.935110   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:00:01.935141   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:00:01.935197   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:00:01.936087   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:00:01.967117   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:00:02.003800   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:00:02.048029   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:00:02.089245   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0421 20:00:02.125745   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:00:02.163109   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:00:02.196506   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:00:02.229323   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:00:02.260648   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:00:02.290829   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:00:02.322222   62197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:00:02.344701   62197 ssh_runner.go:195] Run: openssl version
	I0421 20:00:02.352355   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:00:02.366812   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372857   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372947   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.380616   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:00:02.395933   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:00:02.411591   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418090   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418172   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.425721   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:00:02.443203   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:00:02.458442   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464317   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464386   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.471351   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:00:02.484925   62197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:00:02.491028   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 20:00:02.498970   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 20:00:02.506460   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 20:00:02.514257   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 20:00:02.521253   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 20:00:02.528828   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
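The `openssl x509 -checkend 86400` calls above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit means the cert does not expire within that window. Below is a minimal sketch of the same check in Go, not minikube's actual implementation; the certificate path is taken from the log, the helper name is illustrative.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path is still valid
    // for at least the given duration (mirrors `openssl x509 -checkend`).
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Valid for at least d if the expiry lies beyond now+d.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for at least 24h:", ok)
    }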
	I0421 20:00:02.537353   62197 kubeadm.go:391] StartCluster: {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727
235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:00:02.537443   62197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:00:02.537495   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.587891   62197 cri.go:89] found id: ""
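The step above shells out to crictl to list any existing kube-system containers before deciding between a fresh init and a cluster restart; the empty result (`found id: ""`) means nothing is running yet. A rough Go equivalent of that invocation is sketched below, using the same crictl flags the log shows; the wrapper function is illustrative, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers returns the IDs of containers whose pods live in
    // the kube-system namespace, using the same crictl invocation as the log above.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	// crictl --quiet prints one container ID per line.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
    }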
	I0421 20:00:02.587996   62197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0421 20:00:02.601571   62197 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 20:00:02.601600   62197 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 20:00:02.601606   62197 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 20:00:02.601658   62197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 20:00:02.616596   62197 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:00:02.617728   62197 kubeconfig.go:125] found "embed-certs-727235" server: "https://192.168.72.9:8443"
	I0421 20:00:02.619968   62197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:00:02.634565   62197 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.9
	I0421 20:00:02.634618   62197 kubeadm.go:1154] stopping kube-system containers ...
	I0421 20:00:02.634633   62197 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0421 20:00:02.634699   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.685251   62197 cri.go:89] found id: ""
	I0421 20:00:02.685329   62197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 20:00:02.707720   62197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:00:02.722037   62197 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:00:02.722082   62197 kubeadm.go:156] found existing configuration files:
	
	I0421 20:00:02.722140   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:00:02.735544   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:00:02.735610   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:00:02.748027   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:00:02.759766   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:00:02.759841   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:00:02.773350   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.787463   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:00:02.787519   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.802575   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:00:02.816988   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:00:02.817045   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:00:02.830215   62197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:00:02.843407   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:03.501684   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.207411   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.448982   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.525835   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
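Rather than running a full `kubeadm init`, the restart path above re-executes individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. The sketch below drives those phases in order via exec, using the exact commands the log shows; the error handling and program structure are illustrative, not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The same phase sequence the restart log runs, in order.
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
    			os.Exit(1)
    		}
    		fmt.Printf("phase %q completed\n", phase)
    	}
    }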
	I0421 20:00:04.656875   62197 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:00:04.656964   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.157388   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.657897   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.717895   62197 api_server.go:72] duration metric: took 1.061019387s to wait for apiserver process to appear ...
	I0421 20:00:05.717929   62197 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:00:05.717953   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:05.718558   62197 api_server.go:269] stopped: https://192.168.72.9:8443/healthz: Get "https://192.168.72.9:8443/healthz": dial tcp 192.168.72.9:8443: connect: connection refused
	I0421 20:00:06.218281   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.703744   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.703789   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.703806   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.722219   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.722249   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.722265   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.733030   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.733061   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:09.218765   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.224083   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.224115   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:09.718435   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.726603   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.726629   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:10.218162   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:10.224240   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 200:
	ok
	I0421 20:00:10.235750   62197 api_server.go:141] control plane version: v1.30.0
	I0421 20:00:10.235778   62197 api_server.go:131] duration metric: took 4.517842889s to wait for apiserver health ...
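The healthz sequence above polls https://192.168.72.9:8443/healthz roughly every 500ms, treating connection refusals, 403 responses (RBAC bootstrap not finished) and 500 responses (post-start hooks still failing) as "not ready yet", and stops once it gets a plain 200 "ok". A simplified sketch of that polling loop follows; it makes anonymous requests and skips TLS verification purely for illustration, and the timeout value is an assumption rather than a value from the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout elapses. Non-200 responses are treated as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe talks to a self-signed apiserver cert; skipping
    		// verification here is only for the sketch, not production use.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		} else {
    			fmt.Println("healthz not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.9:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }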
	I0421 20:00:10.235787   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:10.235793   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:10.237625   62197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:00:10.239279   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:00:10.262918   62197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:00:10.297402   62197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:00:10.310749   62197 system_pods.go:59] 8 kube-system pods found
	I0421 20:00:10.310805   62197 system_pods.go:61] "coredns-7db6d8ff4d-52bft" [85facf66-ffda-447c-8a04-ac95ac842470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0421 20:00:10.310818   62197 system_pods.go:61] "etcd-embed-certs-727235" [e7031073-0e50-431e-ab67-eda1fa4b9f18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 20:00:10.310833   62197 system_pods.go:61] "kube-apiserver-embed-certs-727235" [28be3882-5790-4754-9ef6-ec8f71367757] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0421 20:00:10.310847   62197 system_pods.go:61] "kube-controller-manager-embed-certs-727235" [83da56c1-3479-47f0-936f-ef9d0e4f455d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0421 20:00:10.310854   62197 system_pods.go:61] "kube-proxy-djqh8" [307fa1e9-345f-49b9-85e5-7b20b3275b45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0421 20:00:10.310865   62197 system_pods.go:61] "kube-scheduler-embed-certs-727235" [096371b2-a9b9-4867-a7da-b540432a973b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 20:00:10.310884   62197 system_pods.go:61] "metrics-server-569cc877fc-959cd" [146c80ec-6ae0-4ba3-b4be-df99fbf010a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:00:10.310901   62197 system_pods.go:61] "storage-provisioner" [054513d7-51f3-40eb-b875-b73d16c7405b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0421 20:00:10.310913   62197 system_pods.go:74] duration metric: took 13.478482ms to wait for pod list to return data ...
	I0421 20:00:10.310928   62197 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:00:10.315131   62197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:00:10.315170   62197 node_conditions.go:123] node cpu capacity is 2
	I0421 20:00:10.315187   62197 node_conditions.go:105] duration metric: took 4.252168ms to run NodePressure ...
	I0421 20:00:10.315210   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:10.620925   62197 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628865   62197 kubeadm.go:733] kubelet initialised
	I0421 20:00:10.628891   62197 kubeadm.go:734] duration metric: took 7.942591ms waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628899   62197 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
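From this point the log waits up to 4 minutes for each system-critical pod to report the Ready condition, re-checking every couple of seconds. Below is a minimal client-go sketch of that kind of wait, not the pod_ready.go implementation itself; the kubeconfig path and 2-second poll interval are assumptions, while the pod name comes from the log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()

    	name, ns := "coredns-7db6d8ff4d-52bft", "kube-system"
    	for {
    		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Printf("pod %q is Ready\n", name)
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Printf("timed out waiting for pod %q\n", name)
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }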
	I0421 20:00:10.635290   62197 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:12.642618   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:14.648309   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:16.143559   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:16.143590   62197 pod_ready.go:81] duration metric: took 5.508275049s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:16.143602   62197 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:18.151189   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:20.152541   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.153814   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.649883   62197 pod_ready.go:92] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.649903   62197 pod_ready.go:81] duration metric: took 6.506293522s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.649912   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655444   62197 pod_ready.go:92] pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.655460   62197 pod_ready.go:81] duration metric: took 5.541421ms for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655468   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660078   62197 pod_ready.go:92] pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.660094   62197 pod_ready.go:81] duration metric: took 4.62017ms for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660102   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664789   62197 pod_ready.go:92] pod "kube-proxy-djqh8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.664808   62197 pod_ready.go:81] duration metric: took 4.700876ms for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664816   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668836   62197 pod_ready.go:92] pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.668851   62197 pod_ready.go:81] duration metric: took 4.029823ms for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668858   62197 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:24.676797   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:26.678669   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:29.175261   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:31.176580   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:33.677232   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:36.176401   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:38.678477   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:40.679096   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:43.178439   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:45.675906   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:47.676304   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:49.678715   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:52.176666   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:54.177353   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:56.677078   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:58.680937   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:01.175866   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:03.177322   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:05.676551   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:08.176504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:10.675324   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:12.679609   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:15.177636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:17.177938   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:19.676849   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:21.677530   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:23.679352   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:26.176177   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:28.676123   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:30.677770   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:33.176672   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:35.675473   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:37.676094   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:40.177351   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:42.675765   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:44.677504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:47.178728   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:49.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:51.676977   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:53.677967   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:56.177161   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:58.675893   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:00.676490   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:03.175994   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:05.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:08.176147   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:10.676394   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:13.176425   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:15.178380   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:17.677109   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:20.174895   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:22.176464   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:24.177654   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:26.675586   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:28.676639   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:31.176664   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:33.677030   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:36.176792   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:38.176920   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:40.180665   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:42.678395   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:45.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:47.675740   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:49.676127   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:52.179886   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:54.675602   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:56.677577   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:58.681540   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:01.179494   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:03.676002   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:06.178560   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:08.676363   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:11.176044   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:13.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:15.676011   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:17.678133   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:20.177064   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:22.676179   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:25.176206   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:27.176706   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:29.177019   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:31.677239   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.276144093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95de85bd-f8d6-452b-aeec-76186237a1a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.276343499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95de85bd-f8d6-452b-aeec-76186237a1a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.303537555Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=65bee12a-1c53-4110-a38e-05aa0239d0e5 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.303614977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65bee12a-1c53-4110-a38e-05aa0239d0e5 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.308679096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=206b984e-fe87-4d1d-be0f-394edb2c304f name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.308788997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=206b984e-fe87-4d1d-be0f-394edb2c304f name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.312496398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dc5921a-020e-4f0d-9660-75a8c0f286fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.312884822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729817312863115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dc5921a-020e-4f0d-9660-75a8c0f286fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.313883646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76d54e7a-78cd-4d67-969d-c3c648bd6c16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.313938101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76d54e7a-78cd-4d67-969d-c3c648bd6c16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.314118324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76d54e7a-78cd-4d67-969d-c3c648bd6c16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.359117833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cee6fd76-1b68-408c-8188-82e138c8ad67 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.359229340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cee6fd76-1b68-408c-8188-82e138c8ad67 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.360314750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc02943f-6852-436b-a881-c7c26f055b4a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.360777980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729817360753978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc02943f-6852-436b-a881-c7c26f055b4a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.361580000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef8fa3e3-8fb5-4a53-b624-03a6f4987570 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.361686832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef8fa3e3-8fb5-4a53-b624-03a6f4987570 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.361929324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef8fa3e3-8fb5-4a53-b624-03a6f4987570 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.400978135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de1cae67-deb9-48ec-a8f5-805e7cb27a10 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.401082006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de1cae67-deb9-48ec-a8f5-805e7cb27a10 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.403002794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=202d4c3b-a3e9-4aa6-85b4-9499160ec1a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.403445574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729817403348667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=202d4c3b-a3e9-4aa6-85b4-9499160ec1a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.404021269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=374b81ed-bae0-4ad5-9aaf-5ceb91dd57e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.404069252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=374b81ed-bae0-4ad5-9aaf-5ceb91dd57e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:03:37 no-preload-597568 crio[722]: time="2024-04-21 20:03:37.404243333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=374b81ed-bae0-4ad5-9aaf-5ceb91dd57e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7b27fdecb0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   60edc6f949a12       coredns-7db6d8ff4d-vtxv7
	7875176994a40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   eb22644072be5       coredns-7db6d8ff4d-vh287
	6afa1b4b5a5b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   0e596db87b175       storage-provisioner
	370d702a2b5cd       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   c0faefa7d7740       kube-proxy-km222
	1bf9ee926ddc0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   930b5ef33edb3       etcd-no-preload-597568
	ede0e8fc4bf66       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   850e8c2c86d8d       kube-controller-manager-no-preload-597568
	a9f121b4732e0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   2cb5c65a4ccf6       kube-scheduler-no-preload-597568
	f525f9081ae7b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   4116a636eac9c       kube-apiserver-no-preload-597568
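	The table above is the CRI-level view of the running containers. A roughly equivalent listing can be reproduced by hand (a sketch only, assuming the crictl binary is available on the minikube node; the CRI-O socket path is the one shown in the node's cri-socket annotation below):
	  out/minikube-linux-amd64 -p no-preload-597568 ssh \
	    "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"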
	
	
	==> coredns [7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
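	Both CoreDNS replicas show only the startup banner, i.e. no resolution errors were logged before collection. The same logs can be pulled directly from the pods (a sketch, assuming the kubectl context is named after the profile):
	  kubectl --context no-preload-597568 -n kube-system logs coredns-7db6d8ff4d-vh287
	  kubectl --context no-preload-597568 -n kube-system logs coredns-7db6d8ff4d-vtxv7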
	
	
	==> describe nodes <==
	Name:               no-preload-597568
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-597568
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=no-preload-597568
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_54_17_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-597568
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:03:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:59:43 +0000   Sun, 21 Apr 2024 19:54:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:59:43 +0000   Sun, 21 Apr 2024 19:54:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:59:43 +0000   Sun, 21 Apr 2024 19:54:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:59:43 +0000   Sun, 21 Apr 2024 19:54:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    no-preload-597568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43837302ba054f0dabdbc5eba4081f11
	  System UUID:                43837302-ba05-4f0d-abdb-c5eba4081f11
	  Boot ID:                    79e64129-0fbc-4036-928a-66c5cf129043
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-vh287                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-vtxv7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-597568                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-597568             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-597568    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-km222                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-597568             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-p9f9x              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-597568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-597568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-597568 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-597568 event: Registered Node no-preload-597568 in Controller
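	The node report above is kubectl describe output for the control-plane node; all four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) report healthy. The node state can be re-checked interactively (a sketch, assuming the kubectl context matches the profile name):
	  kubectl --context no-preload-597568 get node no-preload-597568 -o wide
	  kubectl --context no-preload-597568 describe node no-preload-597568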
	
	
	==> dmesg <==
	[  +0.044183] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662461] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.527263] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.697682] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.636411] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058976] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072806] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.205746] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.136530] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.307706] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Apr21 19:49] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.054679] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.992237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +3.014645] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.864279] kauditd_printk_skb: 53 callbacks suppressed
	[ +11.041209] kauditd_printk_skb: 24 callbacks suppressed
	[Apr21 19:54] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.632779] systemd-fstab-generator[4064]: Ignoring "noauto" option for root device
	[  +4.579608] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.991666] systemd-fstab-generator[4390]: Ignoring "noauto" option for root device
	[ +14.414700] systemd-fstab-generator[4595]: Ignoring "noauto" option for root device
	[  +0.080240] kauditd_printk_skb: 14 callbacks suppressed
	[Apr21 19:55] kauditd_printk_skb: 88 callbacks suppressed
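	The dmesg excerpt above contains only boot-time fstab-generator messages, benign NFSD recovery warnings, and kauditd rate-limit notices. It can be re-collected from the guest while the profile is still running (a sketch):
	  out/minikube-linux-amd64 -p no-preload-597568 ssh "sudo dmesg | tail -n 30"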
	
	
	==> etcd [1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c] <==
	{"level":"info","ts":"2024-04-21T19:54:12.075523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-21T19:54:12.075566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgPreVoteResp from af2c917f7a70ddd0 at term 1"}
	{"level":"info","ts":"2024-04-21T19:54:12.07558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:54:12.075586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgVoteResp from af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-04-21T19:54:12.0756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became leader at term 2"}
	{"level":"info","ts":"2024-04-21T19:54:12.075608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af2c917f7a70ddd0 elected leader af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-04-21T19:54:12.079138Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"af2c917f7a70ddd0","local-member-attributes":"{Name:no-preload-597568 ClientURLs:[https://192.168.39.120:2379]}","request-path":"/0/members/af2c917f7a70ddd0/attributes","cluster-id":"f3de5e1602edc73b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:54:12.079315Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.079525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:54:12.081443Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:54:12.081485Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:54:12.081519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.081567Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.081581Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.081626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:54:12.086145Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.120:2379"}
	{"level":"info","ts":"2024-04-21T19:54:12.098856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T20:00:03.332345Z","caller":"traceutil/trace.go:171","msg":"trace[83220497] transaction","detail":"{read_only:false; response_revision:722; number_of_response:1; }","duration":"572.548804ms","start":"2024-04-21T20:00:02.759721Z","end":"2024-04-21T20:00:03.33227Z","steps":["trace[83220497] 'process raft request'  (duration: 572.303848ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:00:03.332439Z","caller":"traceutil/trace.go:171","msg":"trace[1553232862] linearizableReadLoop","detail":"{readStateIndex:807; appliedIndex:807; }","duration":"146.881785ms","start":"2024-04-21T20:00:03.185469Z","end":"2024-04-21T20:00:03.332351Z","steps":["trace[1553232862] 'read index received'  (duration: 146.872528ms)","trace[1553232862] 'applied index is now lower than readState.Index'  (duration: 7.607µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:00:03.332714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.136527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:00:03.334356Z","caller":"traceutil/trace.go:171","msg":"trace[1848858816] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:722; }","duration":"148.893138ms","start":"2024-04-21T20:00:03.185442Z","end":"2024-04-21T20:00:03.334335Z","steps":["trace[1848858816] 'agreement among raft nodes before linearized reading'  (duration: 147.099037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:00:03.33453Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:00:02.759702Z","time spent":"573.631438ms","remote":"127.0.0.1:55998","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:720 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-21T20:00:03.799342Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.295474ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15983432317249636202 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-597568\" mod_revision:715 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T20:00:03.799523Z","caller":"traceutil/trace.go:171","msg":"trace[1524348268] transaction","detail":"{read_only:false; response_revision:723; number_of_response:1; }","duration":"537.425042ms","start":"2024-04-21T20:00:03.262084Z","end":"2024-04-21T20:00:03.799509Z","steps":["trace[1524348268] 'process raft request'  (duration: 199.69156ms)","trace[1524348268] 'compare'  (duration: 337.043778ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:00:03.799586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:00:03.262067Z","time spent":"537.486143ms","remote":"127.0.0.1:56098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":556,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-597568\" mod_revision:715 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" > >"}
	
	
	==> kernel <==
	 20:03:37 up 14 min,  0 users,  load average: 0.05, 0.15, 0.12
	Linux no-preload-597568 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e] <==
	W0421 19:59:15.078225       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 19:59:15.078457       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 19:59:15.079741       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0421 20:00:03.335787       1 trace.go:236] Trace[1135535222]: "Update" accept:application/json, */*,audit-id:bd92b2c3-eaf8-43fd-b48b-4d1983041d71,client:192.168.39.120,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (21-Apr-2024 20:00:02.758) (total time: 577ms):
	Trace[1135535222]: ["GuaranteedUpdate etcd3" audit-id:bd92b2c3-eaf8-43fd-b48b-4d1983041d71,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 577ms (20:00:02.758)
	Trace[1135535222]:  ---"Txn call completed" 576ms (20:00:03.335)]
	Trace[1135535222]: [577.418124ms] [577.418124ms] END
	I0421 20:00:03.800428       1 trace.go:236] Trace[592044530]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7cc55835-be10-470d-96e3-b75522acf244,client:192.168.39.120,api-group:coordination.k8s.io,api-version:v1,name:no-preload-597568,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-597568,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (21-Apr-2024 20:00:03.260) (total time: 539ms):
	Trace[592044530]: ["GuaranteedUpdate etcd3" audit-id:7cc55835-be10-470d-96e3-b75522acf244,key:/leases/kube-node-lease/no-preload-597568,type:*coordination.Lease,resource:leases.coordination.k8s.io 539ms (20:00:03.260)
	Trace[592044530]:  ---"Txn call completed" 538ms (20:00:03.800)]
	Trace[592044530]: [539.566862ms] [539.566862ms] END
	W0421 20:00:15.079123       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:00:15.079486       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:00:15.079561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:00:15.080481       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:00:15.080662       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:00:15.080700       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:02:15.080588       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:02:15.080728       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:02:15.080737       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:02:15.080806       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:02:15.080823       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:02:15.081990       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
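	The repeating 503s above mean the v1beta1.metrics.k8s.io APIService never became available to the aggregator, which would explain any metrics-dependent checks timing out. Useful follow-up checks (a sketch, assuming the metrics-server addon carries its standard k8s-app label and the kubectl context matches the profile):
	  kubectl --context no-preload-597568 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context no-preload-597568 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context no-preload-597568 get --raw /apis/metrics.k8s.io/v1beta1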
	
	
	==> kube-controller-manager [ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d] <==
	I0421 19:58:05.998899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="99.214µs"
	E0421 19:58:30.885707       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 19:58:31.365780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 19:59:00.892742       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 19:59:01.382346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 19:59:30.901351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 19:59:31.392597       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:00:00.907279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:00:01.401026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:00:30.914147       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:00:31.410900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:00:34.002670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="390.69µs"
	I0421 20:00:49.003606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="656.278µs"
	E0421 20:01:00.924149       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:01:01.428244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:01:30.930526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:01:31.436535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:02:00.936188       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:02:01.446036       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:02:30.943290       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:02:31.455587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:03:00.950810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:03:01.473969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:03:30.958203       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:03:31.483104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32] <==
	I0421 19:54:31.502670       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:54:31.523322       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	I0421 19:54:31.605900       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:54:31.605963       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:54:31.606001       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:54:31.613091       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:54:31.613268       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:54:31.613310       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:54:31.614830       1 config.go:192] "Starting service config controller"
	I0421 19:54:31.614844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:54:31.614864       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:54:31.614867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:54:31.615156       1 config.go:319] "Starting node config controller"
	I0421 19:54:31.615195       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:54:31.715927       1 shared_informer.go:320] Caches are synced for node config
	I0421 19:54:31.715958       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:54:31.715989       1 shared_informer.go:320] Caches are synced for endpoint slice config
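	kube-proxy came up in iptables mode and synced all three informer caches within a second of startup. The service NAT rules it programs can be spot-checked on the node (a sketch; KUBE-SERVICES is the standard top-level chain used by the iptables proxier):
	  out/minikube-linux-amd64 -p no-preload-597568 ssh \
	    "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"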
	
	
	==> kube-scheduler [a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281] <==
	W0421 19:54:15.043354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:54:15.043472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:54:15.057487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 19:54:15.057540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 19:54:15.070311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:54:15.070503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:54:15.084859       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:54:15.085123       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:54:15.112854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 19:54:15.113044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 19:54:15.118974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:54:15.119004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:54:15.139305       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 19:54:15.141052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 19:54:15.161457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 19:54:15.161654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 19:54:15.232530       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 19:54:15.232660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 19:54:15.249583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:54:15.249736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:54:15.519244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:54:15.519302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:54:15.542666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:54:15.542730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0421 19:54:17.034699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:01:17 no-preload-597568 kubelet[4398]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:01:17 no-preload-597568 kubelet[4398]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:01:17 no-preload-597568 kubelet[4398]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:01:17 no-preload-597568 kubelet[4398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:01:24 no-preload-597568 kubelet[4398]: E0421 20:01:24.984326    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:01:35 no-preload-597568 kubelet[4398]: E0421 20:01:35.987326    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:01:48 no-preload-597568 kubelet[4398]: E0421 20:01:48.984632    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:02:00 no-preload-597568 kubelet[4398]: E0421 20:02:00.984040    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:02:11 no-preload-597568 kubelet[4398]: E0421 20:02:11.983987    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:02:17 no-preload-597568 kubelet[4398]: E0421 20:02:17.046305    4398 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:02:17 no-preload-597568 kubelet[4398]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:02:17 no-preload-597568 kubelet[4398]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:02:17 no-preload-597568 kubelet[4398]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:02:17 no-preload-597568 kubelet[4398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:02:22 no-preload-597568 kubelet[4398]: E0421 20:02:22.985359    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:02:36 no-preload-597568 kubelet[4398]: E0421 20:02:36.985870    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:02:47 no-preload-597568 kubelet[4398]: E0421 20:02:47.984293    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:03:02 no-preload-597568 kubelet[4398]: E0421 20:03:02.986951    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:03:14 no-preload-597568 kubelet[4398]: E0421 20:03:14.988687    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:03:17 no-preload-597568 kubelet[4398]: E0421 20:03:17.040750    4398 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:03:17 no-preload-597568 kubelet[4398]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:03:17 no-preload-597568 kubelet[4398]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:03:17 no-preload-597568 kubelet[4398]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:03:17 no-preload-597568 kubelet[4398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:03:28 no-preload-597568 kubelet[4398]: E0421 20:03:28.983336    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	
	
	==> storage-provisioner [6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36] <==
	I0421 19:54:32.683932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 19:54:32.796528       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 19:54:32.796680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 19:54:32.815748       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 19:54:32.816004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-597568_91fac8ad-76ca-4123-b811-61aedbd9e6e6!
	I0421 19:54:32.822868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"098c7d3e-8032-4e18-b0a7-71897245390c", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-597568_91fac8ad-76ca-4123-b811-61aedbd9e6e6 became leader
	I0421 19:54:32.916991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-597568_91fac8ad-76ca-4123-b811-61aedbd9e6e6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-597568 -n no-preload-597568
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-597568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-p9f9x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-597568 describe pod metrics-server-569cc877fc-p9f9x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-597568 describe pod metrics-server-569cc877fc-p9f9x: exit status 1 (66.149275ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-p9f9x" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-597568 describe pod metrics-server-569cc877fc-p9f9x: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.47s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235: exit status 3 (3.19600812s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0421 19:54:43.554407   62071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host
	E0421 19:54:43.554427   62071 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-727235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-727235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154183602s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-727235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235: exit status 3 (3.065942195s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0421 19:54:52.774510   62151 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host
	E0421 19:54:52.774545   62151 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.9:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-727235" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0421 19:56:09.207817   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-21 20:04:24.112354308 +0000 UTC m=+6172.636489748
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-167454 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-167454 logs -n 25: (1.497418603s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-867585        | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-167454       | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC | 21 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:54:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:54:52.830912   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.830926   62197 out.go:304] Setting ErrFile to fd 2...
	I0421 19:54:52.830932   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.831126   62197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:54:52.831742   62197 out.go:298] Setting JSON to false
	I0421 19:54:52.832674   62197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5791,"bootTime":1713723502,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:54:52.832739   62197 start.go:139] virtualization: kvm guest
	I0421 19:54:52.835455   62197 out.go:177] * [embed-certs-727235] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:54:52.837412   62197 notify.go:220] Checking for updates...
	I0421 19:54:52.837418   62197 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:54:52.839465   62197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:54:52.841250   62197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:54:52.842894   62197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:54:52.844479   62197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:54:52.845967   62197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:54:52.847931   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:54:52.848387   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.848464   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.864769   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0421 19:54:52.865105   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.865623   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.865642   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.865919   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.866109   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.866305   62197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:54:52.866589   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.866622   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.880497   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0421 19:54:52.880874   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.881355   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.881380   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.881691   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.881883   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.916395   62197 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:54:52.917730   62197 start.go:297] selected driver: kvm2
	I0421 19:54:52.917753   62197 start.go:901] validating driver "kvm2" against &{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.917858   62197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:54:52.918512   62197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.918585   62197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:54:52.933446   62197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:54:52.933791   62197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:54:52.933845   62197 cni.go:84] Creating CNI manager for ""
	I0421 19:54:52.933858   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:54:52.933901   62197 start.go:340] cluster config:
	{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.933981   62197 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.936907   62197 out.go:177] * Starting "embed-certs-727235" primary control-plane node in "embed-certs-727235" cluster
	I0421 19:54:52.938596   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:54:52.938626   62197 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:54:52.938633   62197 cache.go:56] Caching tarball of preloaded images
	I0421 19:54:52.938690   62197 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:54:52.938701   62197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:54:52.938791   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:54:52.938960   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:54:52.938995   62197 start.go:364] duration metric: took 19.691µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:54:52.939006   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:54:52.939011   62197 fix.go:54] fixHost starting: 
	I0421 19:54:52.939248   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.939274   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.953191   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0421 19:54:52.953602   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.953994   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.954024   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.954454   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.954602   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.954750   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:54:52.956153   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Running err=<nil>
	W0421 19:54:52.956167   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:54:52.958195   62197 out.go:177] * Updating the running kvm2 "embed-certs-727235" VM ...
	I0421 19:54:52.959459   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:54:52.959476   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.959678   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:54:52.961705   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:51:24 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:54:52.962165   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962245   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:54:52.962392   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962555   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962682   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:54:52.962853   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:54:52.963028   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:54:52.963038   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:54:55.842410   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:58.070842   57617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.405000958s)
	I0421 19:54:58.070936   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:54:58.089413   57617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:54:58.101786   57617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:54:58.114021   57617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:54:58.114065   57617 kubeadm.go:156] found existing configuration files:
	
	I0421 19:54:58.114126   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0421 19:54:58.124228   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:54:58.124296   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:54:58.135037   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0421 19:54:58.144890   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:54:58.144958   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:54:58.155403   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.165155   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:54:58.165207   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.175703   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0421 19:54:58.185428   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:54:58.185521   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:54:58.195328   57617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:54:58.257787   57617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:54:58.257868   57617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:54:58.432626   57617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:54:58.432766   57617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:54:58.432943   57617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:54:58.677807   57617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:54:58.679655   57617 out.go:204]   - Generating certificates and keys ...
	I0421 19:54:58.679763   57617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:54:58.679856   57617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:54:58.679974   57617 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:54:58.680053   57617 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:54:58.680125   57617 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:54:58.680177   57617 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:54:58.681691   57617 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:54:58.682034   57617 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:54:58.682257   57617 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:54:58.682547   57617 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:54:58.682770   57617 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:54:58.682840   57617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:54:58.938223   57617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:54:58.989244   57617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:54:59.196060   57617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:54:59.378330   57617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:54:59.435654   57617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:54:59.436159   57617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:54:59.440839   57617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:54:58.914303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:59.442694   57617 out.go:204]   - Booting up control plane ...
	I0421 19:54:59.442826   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:54:59.442942   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:54:59.443122   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:54:59.466298   57617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:54:59.469370   57617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:54:59.469656   57617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:54:59.622281   57617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:54:59.622433   57617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:55:00.123513   57617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.401309ms
	I0421 19:55:00.123606   57617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:55:05.627324   57617 kubeadm.go:309] [api-check] The API server is healthy after 5.503528473s
	I0421 19:55:05.644392   57617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:55:05.666212   57617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:55:05.696150   57617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:55:05.696487   57617 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-167454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:55:05.709873   57617 kubeadm.go:309] [bootstrap-token] Using token: ypxtpg.5u6l3v2as04iv2aj
	I0421 19:55:05.711407   57617 out.go:204]   - Configuring RBAC rules ...
	I0421 19:55:05.711556   57617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:55:05.721552   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:55:05.735168   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:55:05.739580   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:55:05.743466   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:55:05.747854   57617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:55:06.034775   57617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:55:06.468585   57617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:55:07.036924   57617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:55:07.036983   57617 kubeadm.go:309] 
	I0421 19:55:07.037040   57617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:55:07.037060   57617 kubeadm.go:309] 
	I0421 19:55:07.037199   57617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:55:07.037218   57617 kubeadm.go:309] 
	I0421 19:55:07.037259   57617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:55:07.037348   57617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:55:07.037419   57617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:55:07.037433   57617 kubeadm.go:309] 
	I0421 19:55:07.037526   57617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:55:07.037540   57617 kubeadm.go:309] 
	I0421 19:55:07.037604   57617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:55:07.037615   57617 kubeadm.go:309] 
	I0421 19:55:07.037681   57617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:55:07.037760   57617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:55:07.037823   57617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:55:07.037828   57617 kubeadm.go:309] 
	I0421 19:55:07.037899   57617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:55:07.037964   57617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:55:07.037970   57617 kubeadm.go:309] 
	I0421 19:55:07.038098   57617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038255   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 19:55:07.038283   57617 kubeadm.go:309] 	--control-plane 
	I0421 19:55:07.038288   57617 kubeadm.go:309] 
	I0421 19:55:07.038400   57617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:55:07.038411   57617 kubeadm.go:309] 
	I0421 19:55:07.038517   57617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038672   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 19:55:07.038956   57617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:55:07.038982   57617 cni.go:84] Creating CNI manager for ""
	I0421 19:55:07.038998   57617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:55:07.040852   57617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:55:04.994338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:07.042257   57617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:55:07.057287   57617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 19:55:07.078228   57617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:55:07.078330   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.078390   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167454 minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=default-k8s-diff-port-167454 minikube.k8s.io/primary=true
	I0421 19:55:07.128726   57617 ops.go:34] apiserver oom_adj: -16
	I0421 19:55:07.277531   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.778563   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.066312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:08.278441   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.778051   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.277768   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.777868   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.278602   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.777607   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.278260   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.777609   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.277684   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.778116   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.146347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:17.218265   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:13.278439   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:13.777901   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.278214   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.777957   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.278369   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.778113   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.277991   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.778322   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.278350   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.778144   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.278465   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.778049   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.278228   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.777615   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.945015   57617 kubeadm.go:1107] duration metric: took 12.866746923s to wait for elevateKubeSystemPrivileges
	W0421 19:55:19.945062   57617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:55:19.945073   57617 kubeadm.go:393] duration metric: took 5m11.113256567s to StartCluster
	I0421 19:55:19.945094   57617 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.945186   57617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:55:19.947618   57617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.947919   57617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.23 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:55:19.949819   57617 out.go:177] * Verifying Kubernetes components...
	I0421 19:55:19.947983   57617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:55:19.948132   57617 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:55:19.951664   57617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:55:19.951671   57617 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951685   57617 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951708   57617 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951718   57617 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-167454"
	I0421 19:55:19.951720   57617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167454"
	W0421 19:55:19.951730   57617 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:55:19.951741   57617 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.951753   57617 addons.go:243] addon metrics-server should already be in state true
	I0421 19:55:19.951766   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.951781   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.952059   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952095   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952147   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952169   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952170   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952378   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.969767   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0421 19:55:19.970291   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.971023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.971053   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.971517   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.971747   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.971966   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0421 19:55:19.972325   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0421 19:55:19.972539   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.972691   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.973050   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973075   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973313   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973336   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973408   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973712   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973986   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974023   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.974287   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974321   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.976061   57617 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.976086   57617 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:55:19.976116   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.976473   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.976513   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.989851   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I0421 19:55:19.990053   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0421 19:55:19.990494   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.990573   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.991023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991039   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991170   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991197   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991380   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991527   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991556   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.991713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.993398   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995704   57617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:55:19.994181   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995594   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0421 19:55:19.997429   57617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:19.997450   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:55:19.997470   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:19.998995   57617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 19:55:19.997642   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.000129   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000728   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.000743   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000638   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.000805   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 19:55:20.000816   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 19:55:20.000826   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.000991   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.001147   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.001328   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.001340   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.001362   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.001763   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.002313   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:20.002335   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:20.003803   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004388   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.004404   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004602   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.004792   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.004958   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.005128   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.018016   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0421 19:55:20.018651   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.019177   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.019196   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.019422   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.019702   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:20.021066   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:20.021324   57617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.021340   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:55:20.021357   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.024124   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024503   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.024524   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024686   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.024848   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.025030   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.025184   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.214689   57617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:55:20.264530   57617 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.281976   57617 node_ready.go:49] node "default-k8s-diff-port-167454" has status "Ready":"True"
	I0421 19:55:20.281999   57617 node_ready.go:38] duration metric: took 17.434628ms for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.282007   57617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:20.297108   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:20.386102   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.408686   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 19:55:20.408706   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 19:55:20.416022   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:20.455756   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 19:55:20.455778   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 19:55:20.603535   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.603559   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 19:55:20.690543   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.842718   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.842753   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843074   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843148   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843163   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.843172   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.843191   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843475   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843511   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843525   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.856272   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.856294   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.856618   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.856636   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.856673   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550249   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13418491s)
	I0421 19:55:21.550297   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550305   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550577   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550654   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:21.550663   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550675   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550684   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550928   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550946   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.853935   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.853970   57617 pod_ready.go:81] duration metric: took 1.556832657s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.853984   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924815   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.924845   57617 pod_ready.go:81] duration metric: took 70.852928ms for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924857   57617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955217   57617 pod_ready.go:92] pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.955246   57617 pod_ready.go:81] duration metric: took 30.380253ms for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955259   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975065   57617 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.975094   57617 pod_ready.go:81] duration metric: took 19.818959ms for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975106   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981884   57617 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.981907   57617 pod_ready.go:81] duration metric: took 6.791796ms for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981919   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.001934   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311352362s)
	I0421 19:55:22.001984   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002000   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002311   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002369   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002330   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.002410   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002434   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002649   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002689   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002705   57617 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-167454"
	I0421 19:55:22.002713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.005036   57617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0421 19:55:22.006362   57617 addons.go:505] duration metric: took 2.058380621s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
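	[editor's note - illustrative only, not part of the captured test output] With the metrics-server manifests applied above, a rough manual cross-check of the addon (assuming the stock object names from those manifests: Deployment "metrics-server" in kube-system and APIService "v1beta1.metrics.k8s.io") might look like:
		# hypothetical sketch; the test itself polls the addon state instead
		kubectl -n kube-system get deployment metrics-server
		kubectl get apiservice v1beta1.metrics.k8s.io
		kubectl top nodes   # only returns data once the APIService reports Available=True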
	I0421 19:55:22.269772   57617 pod_ready.go:92] pod "kube-proxy-wmv4v" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.269798   57617 pod_ready.go:81] duration metric: took 287.872366ms for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.269808   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668470   57617 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.668494   57617 pod_ready.go:81] duration metric: took 398.679544ms for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668502   57617 pod_ready.go:38] duration metric: took 2.386486578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:22.668516   57617 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:55:22.668560   57617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:55:22.688191   57617 api_server.go:72] duration metric: took 2.740229162s to wait for apiserver process to appear ...
	I0421 19:55:22.688224   57617 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:55:22.688244   57617 api_server.go:253] Checking apiserver healthz at https://192.168.61.23:8444/healthz ...
	I0421 19:55:22.699424   57617 api_server.go:279] https://192.168.61.23:8444/healthz returned 200:
	ok
	I0421 19:55:22.700614   57617 api_server.go:141] control plane version: v1.30.0
	I0421 19:55:22.700636   57617 api_server.go:131] duration metric: took 12.404937ms to wait for apiserver health ...
	I0421 19:55:22.700643   57617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:55:22.873594   57617 system_pods.go:59] 9 kube-system pods found
	I0421 19:55:22.873622   57617 system_pods.go:61] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:22.873631   57617 system_pods.go:61] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:22.873635   57617 system_pods.go:61] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:22.873639   57617 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:22.873643   57617 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:22.873647   57617 system_pods.go:61] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:22.873651   57617 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:22.873657   57617 system_pods.go:61] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:22.873698   57617 system_pods.go:61] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:22.873717   57617 system_pods.go:74] duration metric: took 173.068164ms to wait for pod list to return data ...
	I0421 19:55:22.873731   57617 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:55:23.068026   57617 default_sa.go:45] found service account: "default"
	I0421 19:55:23.068053   57617 default_sa.go:55] duration metric: took 194.313071ms for default service account to be created ...
	I0421 19:55:23.068064   57617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:55:23.272118   57617 system_pods.go:86] 9 kube-system pods found
	I0421 19:55:23.272148   57617 system_pods.go:89] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:23.272156   57617 system_pods.go:89] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:23.272162   57617 system_pods.go:89] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:23.272168   57617 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:23.272173   57617 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:23.272178   57617 system_pods.go:89] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:23.272184   57617 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:23.272194   57617 system_pods.go:89] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:23.272200   57617 system_pods.go:89] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:23.272212   57617 system_pods.go:126] duration metric: took 204.142116ms to wait for k8s-apps to be running ...
	I0421 19:55:23.272231   57617 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:55:23.272283   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:23.288800   57617 system_svc.go:56] duration metric: took 16.572799ms WaitForService to wait for kubelet
	I0421 19:55:23.288829   57617 kubeadm.go:576] duration metric: took 3.340874079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:55:23.288851   57617 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:55:23.469503   57617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:55:23.469541   57617 node_conditions.go:123] node cpu capacity is 2
	I0421 19:55:23.469554   57617 node_conditions.go:105] duration metric: took 180.696423ms to run NodePressure ...
	I0421 19:55:23.469567   57617 start.go:240] waiting for startup goroutines ...
	I0421 19:55:23.469576   57617 start.go:245] waiting for cluster config update ...
	I0421 19:55:23.469590   57617 start.go:254] writing updated cluster config ...
	I0421 19:55:23.469941   57617 ssh_runner.go:195] Run: rm -f paused
	I0421 19:55:23.521989   57617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:55:23.524030   57617 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-167454" cluster and "default" namespace by default
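	[editor's note - illustrative only, not part of the captured test output] A minimal manual sanity check of the cluster the log reports ready above (assuming minikube's usual profile-named kubectl context and the apiserver address 192.168.61.23:8444 shown earlier) could be:
		# hedged sketch; mirrors the checks the log already performed programmatically
		kubectl --context default-k8s-diff-port-167454 get nodes
		kubectl --context default-k8s-diff-port-167454 -n kube-system get pods
		curl -k https://192.168.61.23:8444/healthz   # expect "ok", as logged by the healthz probe above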
	I0421 19:55:23.298271   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:29.590689   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:55:29.590767   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:55:29.592377   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:29.592430   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:29.592527   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:29.592662   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:29.592794   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:29.592892   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:29.595022   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:29.595115   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:29.595190   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:29.595263   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:29.595311   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:29.595375   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:29.595433   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:29.595520   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:29.595598   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:29.595680   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:29.595775   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:29.595824   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:29.595875   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:29.595919   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:29.595982   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:29.596047   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:29.596091   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:29.596174   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:29.596256   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:29.596301   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:29.596367   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.598820   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:29.598926   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:29.598993   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:29.599054   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:29.599162   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:29.599331   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:29.599418   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:55:29.599516   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599705   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.599772   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599936   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600041   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600191   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600244   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600389   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600481   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600654   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600669   58211 kubeadm.go:309] 
	I0421 19:55:29.600702   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:55:29.600737   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:55:29.600743   58211 kubeadm.go:309] 
	I0421 19:55:29.600777   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:55:29.600810   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:55:29.600901   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:55:29.600908   58211 kubeadm.go:309] 
	I0421 19:55:29.601009   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:55:29.601057   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:55:29.601109   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:55:29.601118   58211 kubeadm.go:309] 
	I0421 19:55:29.601224   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:55:29.601323   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:55:29.601333   58211 kubeadm.go:309] 
	I0421 19:55:29.601485   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:55:29.601579   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:55:29.601646   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:55:29.601751   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:55:29.601835   58211 kubeadm.go:309] 
	W0421 19:55:29.601862   58211 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
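Note on the failure above: the probe kubeadm keeps retrying is the kubelet's healthz endpoint on port 10248, and "connection refused" typically means nothing is listening there, i.e. the kubelet never came up. The same checks can be run by hand on the node, using only commands already quoted in the kubeadm output:
	curl -sSL http://localhost:10248/healthz          # the probe kubeadm retries above
	systemctl status kubelet                          # is the kubelet service running at all?
	journalctl -xeu kubelet                           # why it exited, if it was ever started
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause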
	
	I0421 19:55:29.601908   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:55:30.075850   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:30.092432   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:55:30.103405   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:55:30.103429   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:55:30.103473   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:55:30.114018   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:55:30.114073   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:55:30.124410   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:55:30.134021   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:55:30.134076   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:55:30.143946   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.153926   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:55:30.153973   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.164013   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:55:30.173459   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:55:30.173512   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
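The sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the check fails (grep exits with status 2 here because the files do not exist yet). A hand-rolled sketch of the same check, assuming the endpoint string shown in the log, would be roughly:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop configs that do not point at the expected endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done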
	I0421 19:55:30.184067   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:55:30.259108   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:30.259195   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:30.422144   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:30.422317   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:30.422497   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:30.619194   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:30.621135   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:30.621258   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:30.621314   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:30.621437   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:30.621530   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:30.621617   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:30.621956   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:30.622478   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:30.623068   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:30.623509   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:30.624072   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:30.624110   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:30.624183   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:30.871049   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:30.931466   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:31.088680   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:31.275358   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:31.305344   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:31.307220   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:31.307289   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:31.484365   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.378329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:32.450259   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:31.486164   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:31.486312   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:31.492868   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:31.494787   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:31.496104   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:31.500190   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:38.530370   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:41.602365   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:47.682316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:50.754312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:56.834318   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:59.906313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:05.986294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:09.058300   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:11.503250   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:56:11.503361   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:11.503618   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:15.138313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:16.504469   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:16.504743   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:18.210376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:24.290344   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:27.366276   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:26.505496   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:26.505769   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:33.442294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:36.514319   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:42.594275   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:45.670298   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:46.505851   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:46.506140   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:51.746306   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:54.818338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:00.898357   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:03.974324   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:10.050360   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:13.122376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:19.202341   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:22.274304   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:26.505043   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:57:26.505356   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505385   58211 kubeadm.go:309] 
	I0421 19:57:26.505436   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:57:26.505495   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:57:26.505505   58211 kubeadm.go:309] 
	I0421 19:57:26.505553   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:57:26.505596   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:57:26.505720   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:57:26.505730   58211 kubeadm.go:309] 
	I0421 19:57:26.505839   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:57:26.505883   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:57:26.505912   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:57:26.505919   58211 kubeadm.go:309] 
	I0421 19:57:26.506020   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:57:26.506152   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:57:26.506181   58211 kubeadm.go:309] 
	I0421 19:57:26.506346   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:57:26.506480   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:57:26.506581   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:57:26.506702   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:57:26.506721   58211 kubeadm.go:309] 
	I0421 19:57:26.507115   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:57:26.507237   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:57:26.507330   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:57:26.507409   58211 kubeadm.go:393] duration metric: took 8m0.981544676s to StartCluster
	I0421 19:57:26.507461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:57:26.507523   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:57:26.556647   58211 cri.go:89] found id: ""
	I0421 19:57:26.556676   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.556687   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:57:26.556695   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:57:26.556748   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:57:26.595025   58211 cri.go:89] found id: ""
	I0421 19:57:26.595055   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.595064   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:57:26.595069   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:57:26.595143   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:57:26.634084   58211 cri.go:89] found id: ""
	I0421 19:57:26.634115   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.634126   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:57:26.634134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:57:26.634201   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:57:26.672409   58211 cri.go:89] found id: ""
	I0421 19:57:26.672439   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.672450   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:57:26.672458   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:57:26.672515   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:57:26.720123   58211 cri.go:89] found id: ""
	I0421 19:57:26.720151   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.720159   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:57:26.720165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:57:26.720219   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:57:26.756889   58211 cri.go:89] found id: ""
	I0421 19:57:26.756918   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.756929   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:57:26.756936   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:57:26.757044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:57:26.802160   58211 cri.go:89] found id: ""
	I0421 19:57:26.802188   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.802197   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:57:26.802204   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:57:26.802264   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:57:26.841543   58211 cri.go:89] found id: ""
	I0421 19:57:26.841567   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.841574   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
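Each "listing CRI containers" step above is a filtered crictl query, and every one returns an empty id list (found id: "", 0 containers), confirming that cri-o never started any control-plane or addon container. The same check by hand, assuming cri-o's default socket on the node, is simply:
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  sudo crictl ps -a --quiet --name="$name"   # no output matches the empty results logged above
	done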
	I0421 19:57:26.841583   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:57:26.841598   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:57:26.894547   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:57:26.894575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:57:26.909052   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:57:26.909077   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:57:27.002127   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:57:27.002150   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:57:27.002166   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:57:27.120460   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:57:27.120494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
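The log-gathering phase above shells out to journalctl, dmesg, kubectl and crictl; the "describe nodes" step fails only because the apiserver on localhost:8443 is not running. The underlying commands, copied from the Run lines above, can be replayed directly on the node:
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a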
	W0421 19:57:27.170858   58211 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:57:27.170914   58211 out.go:239] * 
	W0421 19:57:27.170969   58211 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.170990   58211 out.go:239] * 
	W0421 19:57:27.171868   58211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:57:27.174893   58211 out.go:177] 
	W0421 19:57:27.176215   58211 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.176288   58211 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:57:27.176319   58211 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:57:27.177779   58211 out.go:177] 
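The suggestion above is the usual remedy for a cgroup-driver mismatch between cri-o (which defaults to systemd) and the kubelet. Applied to a fresh start, and using a placeholder profile name since the profile is not named at this point in the log, the retried command would look roughly like:
	# <profile> is a placeholder; the driver/runtime flags match this job's KVM + cri-o configuration
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd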
	I0421 19:57:28.354287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:31.426307   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:37.506302   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:40.578329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:46.658286   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:49.730290   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:55.810303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:58.882287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:04.962316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:08.038328   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:14.114282   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:17.186379   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:23.270347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:26.338313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:32.418266   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:35.494377   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:41.570277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:44.642263   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:50.722316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:53.794367   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:59.874261   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:02.946333   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:09.026296   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:12.098331   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:18.178280   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:21.250268   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:27.330277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:30.331351   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:59:30.331383   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331744   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:30.331770   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331983   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:30.333880   62197 machine.go:97] duration metric: took 4m37.374404361s to provisionDockerMachine
	I0421 19:59:30.333921   62197 fix.go:56] duration metric: took 4m37.394910099s for fixHost
	I0421 19:59:30.333928   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 4m37.394926037s
	W0421 19:59:30.333945   62197 start.go:713] error starting host: provision: host is not running
	W0421 19:59:30.334039   62197 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0421 19:59:30.334070   62197 start.go:728] Will try again in 5 seconds ...
	I0421 19:59:35.335761   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:59:35.335860   62197 start.go:364] duration metric: took 61.013µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:59:35.335882   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:59:35.335890   62197 fix.go:54] fixHost starting: 
	I0421 19:59:35.336171   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:59:35.336191   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:59:35.351703   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0421 19:59:35.352186   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:59:35.352723   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:59:35.352752   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:59:35.353060   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:59:35.353252   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:35.353458   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:59:35.355260   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Stopped err=<nil>
	I0421 19:59:35.355290   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	W0421 19:59:35.355474   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:59:35.357145   62197 out.go:177] * Restarting existing kvm2 VM for "embed-certs-727235" ...
	I0421 19:59:35.358345   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Start
	I0421 19:59:35.358510   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring networks are active...
	I0421 19:59:35.359250   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network default is active
	I0421 19:59:35.359533   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network mk-embed-certs-727235 is active
	I0421 19:59:35.359951   62197 main.go:141] libmachine: (embed-certs-727235) Getting domain xml...
	I0421 19:59:35.360663   62197 main.go:141] libmachine: (embed-certs-727235) Creating domain...
	I0421 19:59:36.615174   62197 main.go:141] libmachine: (embed-certs-727235) Waiting to get IP...
	I0421 19:59:36.615997   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.616369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.616421   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.616351   63337 retry.go:31] will retry after 283.711872ms: waiting for machine to come up
	I0421 19:59:36.902032   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.902618   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.902655   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.902566   63337 retry.go:31] will retry after 336.383022ms: waiting for machine to come up
	I0421 19:59:37.240117   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.240613   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.240637   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.240565   63337 retry.go:31] will retry after 468.409378ms: waiting for machine to come up
	I0421 19:59:37.711065   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.711526   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.711556   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.711481   63337 retry.go:31] will retry after 457.618649ms: waiting for machine to come up
	I0421 19:59:38.170991   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.171513   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.171542   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.171450   63337 retry.go:31] will retry after 756.497464ms: waiting for machine to come up
	I0421 19:59:38.929950   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.930460   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.930495   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.930406   63337 retry.go:31] will retry after 667.654845ms: waiting for machine to come up
	I0421 19:59:39.599112   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:39.599566   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:39.599595   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:39.599514   63337 retry.go:31] will retry after 862.586366ms: waiting for machine to come up
	I0421 19:59:40.463709   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:40.464277   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:40.464311   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:40.464216   63337 retry.go:31] will retry after 1.446407672s: waiting for machine to come up
	I0421 19:59:41.912470   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:41.912935   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:41.912967   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:41.912893   63337 retry.go:31] will retry after 1.78143514s: waiting for machine to come up
	I0421 19:59:43.695369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:43.695781   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:43.695818   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:43.695761   63337 retry.go:31] will retry after 1.850669352s: waiting for machine to come up
	I0421 19:59:45.547626   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:45.548119   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:45.548147   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:45.548063   63337 retry.go:31] will retry after 2.399567648s: waiting for machine to come up
	I0421 19:59:47.949884   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:47.950410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:47.950435   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:47.950371   63337 retry.go:31] will retry after 2.461886164s: waiting for machine to come up
	I0421 19:59:50.413594   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:50.414039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:50.414075   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:50.413995   63337 retry.go:31] will retry after 3.706995804s: waiting for machine to come up
	I0421 19:59:54.123715   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124155   62197 main.go:141] libmachine: (embed-certs-727235) Found IP for machine: 192.168.72.9
	I0421 19:59:54.124185   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has current primary IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124194   62197 main.go:141] libmachine: (embed-certs-727235) Reserving static IP address...
	I0421 19:59:54.124657   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.124687   62197 main.go:141] libmachine: (embed-certs-727235) Reserved static IP address: 192.168.72.9
	I0421 19:59:54.124708   62197 main.go:141] libmachine: (embed-certs-727235) DBG | skip adding static IP to network mk-embed-certs-727235 - found existing host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"}
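The retry loop above is libmachine polling libvirt's DHCP leases for the VM's MAC address until the restarted guest obtains an IP. One way to watch the same thing from the host, assuming the network name and MAC shown in the DBG lines, is:
	virsh net-dhcp-leases mk-embed-certs-727235 | grep 52:54:00:9c:43:7c   # shows 192.168.72.9 once the lease exists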
	I0421 19:59:54.124723   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Getting to WaitForSSH function...
	I0421 19:59:54.124737   62197 main.go:141] libmachine: (embed-certs-727235) Waiting for SSH to be available...
	I0421 19:59:54.126889   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127295   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.127327   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH client type: external
	I0421 19:59:54.127437   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa (-rw-------)
	I0421 19:59:54.127483   62197 main.go:141] libmachine: (embed-certs-727235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:59:54.127502   62197 main.go:141] libmachine: (embed-certs-727235) DBG | About to run SSH command:
	I0421 19:59:54.127521   62197 main.go:141] libmachine: (embed-certs-727235) DBG | exit 0
	I0421 19:59:54.254733   62197 main.go:141] libmachine: (embed-certs-727235) DBG | SSH cmd err, output: <nil>: 
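The WaitForSSH step shells out to the system ssh binary with the options logged in the DBG lines above; reconstructed as a single command (all values taken verbatim from the log, argument order approximate), it is:
	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa \
	  -p 22 docker@192.168.72.9 "exit 0"
The empty error and output above ("SSH cmd err, output: <nil>:") mean this probe finally succeeded, which is why provisioning resumes immediately afterwards.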
	I0421 19:59:54.255110   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetConfigRaw
	I0421 19:59:54.255772   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.258448   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.258834   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.258858   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.259128   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:59:54.259326   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:59:54.259348   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:54.259572   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.262235   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262666   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.262695   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262773   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.262946   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263307   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.263484   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.263693   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.263712   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:59:54.379098   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:59:54.379135   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379445   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:54.379477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379680   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.382614   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383064   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.383095   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383211   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.383422   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383585   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383748   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.383896   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.384121   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.384147   62197 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-727235 && echo "embed-certs-727235" | sudo tee /etc/hostname
	I0421 19:59:54.511915   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-727235
	
	I0421 19:59:54.511944   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.515093   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515475   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.515508   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515663   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.515865   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516024   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.516275   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.516436   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.516452   62197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-727235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-727235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-727235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:59:54.638386   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
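For reference, a minimal Go sketch of the idempotent /etc/hosts edit that the shell snippet above performs (same hostname and file as in the log; root privileges and error handling are simplified, and this is not minikube's own code):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell above: leave the file alone if some line
// already names the host, rewrite an existing 127.0.1.1 line if present,
// otherwise append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(text) {
		return nil // hostname already present on some line
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-727235"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
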
	I0421 19:59:54.638426   62197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:59:54.638450   62197 buildroot.go:174] setting up certificates
	I0421 19:59:54.638460   62197 provision.go:84] configureAuth start
	I0421 19:59:54.638468   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.638764   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.641718   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.642084   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642297   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.644790   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645154   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.645182   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645300   62197 provision.go:143] copyHostCerts
	I0421 19:59:54.645353   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:59:54.645363   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:59:54.645423   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:59:54.645506   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:59:54.645514   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:59:54.645535   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:59:54.645587   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:59:54.645594   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:59:54.645613   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:59:54.645658   62197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-727235 san=[127.0.0.1 192.168.72.9 embed-certs-727235 localhost minikube]
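The server certificate above is generated with a fixed SAN list (127.0.0.1, 192.168.72.9, embed-certs-727235, localhost, minikube). A hedged Go sketch of issuing such a certificate with crypto/x509 follows; the key size, validity period, and the inline self-signed CA are assumptions for illustration, not minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the given DNS names and IPs
// with the supplied CA, roughly what the "generating server cert" step above
// produces for server.pem.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // embed-certs-727235, localhost, minikube
		IPAddresses:  ips,      // 127.0.0.1, 192.168.72.9
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	der, err := issueServerCert(ca, caKey, "jenkins.embed-certs-727235",
		[]string{"embed-certs-727235", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.9")})
	fmt.Println(len(der), err)
}
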
	I0421 19:59:54.847892   62197 provision.go:177] copyRemoteCerts
	I0421 19:59:54.847950   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:59:54.847974   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.850561   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.850885   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.850916   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.851070   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.851261   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.851408   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.851542   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:54.939705   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 19:59:54.969564   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:59:54.996643   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:59:55.023261   62197 provision.go:87] duration metric: took 384.790427ms to configureAuth
	I0421 19:59:55.023285   62197 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:59:55.023469   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:59:55.023553   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.026429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026817   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.026851   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026984   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.027176   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027309   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.027605   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.027807   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.027831   62197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:59:55.329921   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:59:55.329950   62197 machine.go:97] duration metric: took 1.070609599s to provisionDockerMachine
	I0421 19:59:55.329967   62197 start.go:293] postStartSetup for "embed-certs-727235" (driver="kvm2")
	I0421 19:59:55.329986   62197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:59:55.330007   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.330422   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:59:55.330455   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.333062   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.333463   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333642   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.333820   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.333973   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.334132   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.422655   62197 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:59:55.428020   62197 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:59:55.428049   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:59:55.428128   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:59:55.428222   62197 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:59:55.428344   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:59:55.439964   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:59:55.469927   62197 start.go:296] duration metric: took 139.939886ms for postStartSetup
	I0421 19:59:55.469977   62197 fix.go:56] duration metric: took 20.134086048s for fixHost
	I0421 19:59:55.469997   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.472590   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.472954   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.472986   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.473194   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.473438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473616   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473813   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.473993   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.474209   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.474220   62197 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:59:55.583326   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713729595.559945159
	
	I0421 19:59:55.583347   62197 fix.go:216] guest clock: 1713729595.559945159
	I0421 19:59:55.583358   62197 fix.go:229] Guest: 2024-04-21 19:59:55.559945159 +0000 UTC Remote: 2024-04-21 19:59:55.469982444 +0000 UTC m=+302.687162567 (delta=89.962715ms)
	I0421 19:59:55.583413   62197 fix.go:200] guest clock delta is within tolerance: 89.962715ms
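A small Go sketch of the guest/host clock comparison logged above, reusing the two timestamps from the log lines; the 2-second tolerance used here is an assumption, not the value minikube actually applies:

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest and host clocks
// and whether it falls inside the given tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(0, 1713729595559945159) // from `date +%s.%N` on the guest
	host := time.Date(2024, 4, 21, 19, 59, 55, 469982444, time.UTC)
	d, ok := clockDelta(guest, host, 2*time.Second) // tolerance value assumed
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // prints delta=89.962715ms within tolerance=true
}
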
	I0421 19:59:55.583420   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 20.24754889s
	I0421 19:59:55.583466   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.583763   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:55.586342   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586700   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.586726   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586824   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587277   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587559   62197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:59:55.587601   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.587683   62197 ssh_runner.go:195] Run: cat /version.json
	I0421 19:59:55.587721   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.590094   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590379   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590476   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590505   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590641   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590721   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590747   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590817   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.590888   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590972   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591052   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.591128   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.591172   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591276   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.676275   62197 ssh_runner.go:195] Run: systemctl --version
	I0421 19:59:55.700845   62197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:59:55.849591   62197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:59:55.856384   62197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:59:55.856444   62197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:59:55.875575   62197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:59:55.875602   62197 start.go:494] detecting cgroup driver to use...
	I0421 19:59:55.875686   62197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:59:55.892497   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:59:55.907596   62197 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:59:55.907660   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:59:55.922805   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:59:55.938117   62197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:59:56.064198   62197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:59:56.239132   62197 docker.go:233] disabling docker service ...
	I0421 19:59:56.239210   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:59:56.256188   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:59:56.271951   62197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:59:56.409651   62197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:59:56.545020   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:59:56.560474   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:59:56.581091   62197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 19:59:56.581170   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.591783   62197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:59:56.591853   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.602656   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.613491   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.624452   62197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:59:56.635277   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.646299   62197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.665973   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.677014   62197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:59:56.687289   62197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:59:56.687340   62197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:59:56.702507   62197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:59:56.723008   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:59:56.879595   62197 ssh_runner.go:195] Run: sudo systemctl restart crio
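The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctls) before cri-o is restarted. A minimal Go sketch of the same kind of in-place option rewrite, illustrative only and limited to the two simplest keys:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setOption rewrites `key = ...` lines in a cri-o drop-in config, matching the
// intent of the sed expressions used above.
func setOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
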
	I0421 19:59:57.034078   62197 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:59:57.034150   62197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:59:57.039565   62197 start.go:562] Will wait 60s for crictl version
	I0421 19:59:57.039621   62197 ssh_runner.go:195] Run: which crictl
	I0421 19:59:57.044006   62197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:59:57.089252   62197 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:59:57.089340   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.121283   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.160334   62197 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 19:59:57.161976   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:57.164827   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165288   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:57.165321   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165536   62197 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0421 19:59:57.170481   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:59:57.185488   62197 kubeadm.go:877] updating cluster {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-
727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:59:57.185682   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:59:57.185736   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:59:57.237246   62197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 19:59:57.237303   62197 ssh_runner.go:195] Run: which lz4
	I0421 19:59:57.241760   62197 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 19:59:57.246777   62197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:59:57.246817   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 19:59:58.900652   62197 crio.go:462] duration metric: took 1.658935699s to copy over tarball
	I0421 19:59:58.900742   62197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:00:01.517236   62197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.616462501s)
	I0421 20:00:01.517268   62197 crio.go:469] duration metric: took 2.616589126s to extract the tarball
	I0421 20:00:01.517279   62197 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:00:01.560109   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:00:01.610448   62197 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:00:01.610476   62197 cache_images.go:84] Images are preloaded, skipping loading
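Both crictl checks above decide whether the preload already provides the required images. A rough sketch of that decision in Go, parsing `crictl images --output json` for a wanted reference (the image name comes from the earlier log line; this is not the actual cache_images implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages is the subset of `crictl images --output json` we need here.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any stored image tag contains the wanted reference.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
	fmt.Println("preloaded:", ok, err)
}
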
	I0421 20:00:01.610484   62197 kubeadm.go:928] updating node { 192.168.72.9 8443 v1.30.0 crio true true} ...
	I0421 20:00:01.610605   62197 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-727235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
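A short Go sketch of how the kubelet ExecStart line in the drop-in above can be assembled from the node's values; the flag set here is abbreviated to the ones visible in the log:

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart rebuilds the ExecStart line shown in the drop-in above.
func kubeletExecStart(version, hostname, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("v1.30.0", "embed-certs-727235", "192.168.72.9"))
}
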
	I0421 20:00:01.610711   62197 ssh_runner.go:195] Run: crio config
	I0421 20:00:01.670151   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:01.670176   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:01.670188   62197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:00:01.670210   62197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.9 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-727235 NodeName:embed-certs-727235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:00:01.670391   62197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-727235"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:00:01.670479   62197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:00:01.683795   62197 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:00:01.683876   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:00:01.696350   62197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0421 20:00:01.717795   62197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:00:01.739491   62197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0421 20:00:01.761288   62197 ssh_runner.go:195] Run: grep 192.168.72.9	control-plane.minikube.internal$ /etc/hosts
	I0421 20:00:01.766285   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:00:01.781727   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:00:01.913030   62197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:00:01.934347   62197 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235 for IP: 192.168.72.9
	I0421 20:00:01.934375   62197 certs.go:194] generating shared ca certs ...
	I0421 20:00:01.934395   62197 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:00:01.934541   62197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:00:01.934615   62197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:00:01.934630   62197 certs.go:256] generating profile certs ...
	I0421 20:00:01.934729   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/client.key
	I0421 20:00:01.934796   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key.2840921d
	I0421 20:00:01.934854   62197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key
	I0421 20:00:01.934994   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:00:01.935032   62197 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:00:01.935045   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:00:01.935078   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:00:01.935110   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:00:01.935141   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:00:01.935197   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:00:01.936087   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:00:01.967117   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:00:02.003800   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:00:02.048029   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:00:02.089245   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0421 20:00:02.125745   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:00:02.163109   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:00:02.196506   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:00:02.229323   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:00:02.260648   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:00:02.290829   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:00:02.322222   62197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:00:02.344701   62197 ssh_runner.go:195] Run: openssl version
	I0421 20:00:02.352355   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:00:02.366812   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372857   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372947   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.380616   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:00:02.395933   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:00:02.411591   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418090   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418172   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.425721   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:00:02.443203   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:00:02.458442   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464317   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464386   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.471351   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
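Each `openssl x509 -hash` / `ln -fs` pair above publishes a CA certificate under its subject-hash name in /etc/ssl/certs so system TLS clients trust it. A hedged Go sketch of the same operation (paths are the ones from the log; the hash is still computed by shelling out to openssl):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name, e.g. b5213941.0 for minikubeCA.pem above.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
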
	I0421 20:00:02.484925   62197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:00:02.491028   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 20:00:02.498970   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 20:00:02.506460   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 20:00:02.514257   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 20:00:02.521253   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 20:00:02.528828   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
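The `-checkend 86400` probes above ask whether each certificate expires within the next 24 hours. A native Go sketch of the same check using crypto/x509 (the file path in main is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at pemPath expires inside the
// given window, i.e. the condition `openssl x509 -checkend` tests for.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
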
	I0421 20:00:02.537353   62197 kubeadm.go:391] StartCluster: {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727
235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:00:02.537443   62197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:00:02.537495   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.587891   62197 cri.go:89] found id: ""
	I0421 20:00:02.587996   62197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0421 20:00:02.601571   62197 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 20:00:02.601600   62197 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 20:00:02.601606   62197 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 20:00:02.601658   62197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 20:00:02.616596   62197 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:00:02.617728   62197 kubeconfig.go:125] found "embed-certs-727235" server: "https://192.168.72.9:8443"
	I0421 20:00:02.619968   62197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:00:02.634565   62197 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.9
	I0421 20:00:02.634618   62197 kubeadm.go:1154] stopping kube-system containers ...
	I0421 20:00:02.634633   62197 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0421 20:00:02.634699   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.685251   62197 cri.go:89] found id: ""
	I0421 20:00:02.685329   62197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 20:00:02.707720   62197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:00:02.722037   62197 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:00:02.722082   62197 kubeadm.go:156] found existing configuration files:
	
	I0421 20:00:02.722140   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:00:02.735544   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:00:02.735610   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:00:02.748027   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:00:02.759766   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:00:02.759841   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:00:02.773350   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.787463   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:00:02.787519   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.802575   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:00:02.816988   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:00:02.817045   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:00:02.830215   62197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:00:02.843407   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:03.501684   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.207411   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.448982   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.525835   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
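Note: after the regenerated kubeadm.yaml is copied into place, the control plane is rebuilt by replaying individual "kubeadm init phase" subcommands in order (certs, kubeconfig, kubelet-start, control-plane, etcd). A hedged Go sketch of that ordered invocation follows; the binary and config paths are taken from the log, while the wrapper (and its lack of the sudo/env PATH prefix used over SSH) is purely illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the ordered "kubeadm init phase" calls from the log.
func runInitPhases(kubeadmPath, configPath string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", configPath)
		out, err := exec.Command(kubeadmPath, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.30.0/kubeadm",
		"/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
	}
}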
	I0421 20:00:04.656875   62197 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:00:04.656964   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.157388   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.657897   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.717895   62197 api_server.go:72] duration metric: took 1.061019387s to wait for apiserver process to appear ...
	I0421 20:00:05.717929   62197 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:00:05.717953   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:05.718558   62197 api_server.go:269] stopped: https://192.168.72.9:8443/healthz: Get "https://192.168.72.9:8443/healthz": dial tcp 192.168.72.9:8443: connect: connection refused
	I0421 20:00:06.218281   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.703744   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.703789   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.703806   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.722219   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.722249   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.722265   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.733030   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.733061   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:09.218765   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.224083   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.224115   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:09.718435   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.726603   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.726629   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:10.218162   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:10.224240   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 200:
	ok
	I0421 20:00:10.235750   62197 api_server.go:141] control plane version: v1.30.0
	I0421 20:00:10.235778   62197 api_server.go:131] duration metric: took 4.517842889s to wait for apiserver health ...
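Note: the healthz exchange above shows the usual bootstrap progression: connection refused while the apiserver starts, 403 for the anonymous probe before RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still running, then 200. A minimal Go sketch of such a poll loop is below; the URL and timeout are from the log, and the loop is illustrative rather than minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the deadline passes.
// 403 and 500 responses are treated as "not ready yet" and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate during bootstrap.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.9:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}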
	I0421 20:00:10.235787   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:10.235793   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:10.237625   62197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:00:10.239279   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:00:10.262918   62197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
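Note: the two lines above create /etc/cni/net.d and copy a 496-byte 1-k8s.conflist into it for the bridge CNI. The log does not show the file's contents, so the conflist embedded in the Go sketch below is only an assumed, typical bridge + host-local + portmap configuration, not the exact file minikube ships.

package main

import "os"

// An illustrative bridge CNI conflist; values are assumptions, not minikube's file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent of the "mkdir -p /etc/cni/net.d" plus the scp seen in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}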
	I0421 20:00:10.297402   62197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:00:10.310749   62197 system_pods.go:59] 8 kube-system pods found
	I0421 20:00:10.310805   62197 system_pods.go:61] "coredns-7db6d8ff4d-52bft" [85facf66-ffda-447c-8a04-ac95ac842470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0421 20:00:10.310818   62197 system_pods.go:61] "etcd-embed-certs-727235" [e7031073-0e50-431e-ab67-eda1fa4b9f18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 20:00:10.310833   62197 system_pods.go:61] "kube-apiserver-embed-certs-727235" [28be3882-5790-4754-9ef6-ec8f71367757] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0421 20:00:10.310847   62197 system_pods.go:61] "kube-controller-manager-embed-certs-727235" [83da56c1-3479-47f0-936f-ef9d0e4f455d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0421 20:00:10.310854   62197 system_pods.go:61] "kube-proxy-djqh8" [307fa1e9-345f-49b9-85e5-7b20b3275b45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0421 20:00:10.310865   62197 system_pods.go:61] "kube-scheduler-embed-certs-727235" [096371b2-a9b9-4867-a7da-b540432a973b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 20:00:10.310884   62197 system_pods.go:61] "metrics-server-569cc877fc-959cd" [146c80ec-6ae0-4ba3-b4be-df99fbf010a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:00:10.310901   62197 system_pods.go:61] "storage-provisioner" [054513d7-51f3-40eb-b875-b73d16c7405b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0421 20:00:10.310913   62197 system_pods.go:74] duration metric: took 13.478482ms to wait for pod list to return data ...
	I0421 20:00:10.310928   62197 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:00:10.315131   62197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:00:10.315170   62197 node_conditions.go:123] node cpu capacity is 2
	I0421 20:00:10.315187   62197 node_conditions.go:105] duration metric: took 4.252168ms to run NodePressure ...
	I0421 20:00:10.315210   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:10.620925   62197 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628865   62197 kubeadm.go:733] kubelet initialised
	I0421 20:00:10.628891   62197 kubeadm.go:734] duration metric: took 7.942591ms waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628899   62197 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:00:10.635290   62197 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:12.642618   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:14.648309   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:16.143559   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:16.143590   62197 pod_ready.go:81] duration metric: took 5.508275049s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:16.143602   62197 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:18.151189   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:20.152541   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.153814   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.649883   62197 pod_ready.go:92] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.649903   62197 pod_ready.go:81] duration metric: took 6.506293522s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.649912   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655444   62197 pod_ready.go:92] pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.655460   62197 pod_ready.go:81] duration metric: took 5.541421ms for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655468   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660078   62197 pod_ready.go:92] pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.660094   62197 pod_ready.go:81] duration metric: took 4.62017ms for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660102   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664789   62197 pod_ready.go:92] pod "kube-proxy-djqh8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.664808   62197 pod_ready.go:81] duration metric: took 4.700876ms for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664816   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668836   62197 pod_ready.go:92] pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.668851   62197 pod_ready.go:81] duration metric: took 4.029823ms for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668858   62197 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:24.676797   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:26.678669   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:29.175261   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:31.176580   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:33.677232   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:36.176401   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:38.678477   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:40.679096   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:43.178439   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:45.675906   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:47.676304   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:49.678715   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:52.176666   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:54.177353   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:56.677078   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:58.680937   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:01.175866   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:03.177322   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:05.676551   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:08.176504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:10.675324   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:12.679609   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:15.177636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:17.177938   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:19.676849   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:21.677530   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:23.679352   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:26.176177   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:28.676123   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:30.677770   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:33.176672   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:35.675473   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:37.676094   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:40.177351   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:42.675765   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:44.677504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:47.178728   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:49.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:51.676977   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:53.677967   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:56.177161   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:58.675893   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:00.676490   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:03.175994   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:05.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:08.176147   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:10.676394   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:13.176425   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:15.178380   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:17.677109   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:20.174895   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:22.176464   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:24.177654   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:26.675586   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:28.676639   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:31.176664   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:33.677030   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:36.176792   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:38.176920   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:40.180665   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:42.678395   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:45.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:47.675740   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:49.676127   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:52.179886   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:54.675602   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:56.677577   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:58.681540   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:01.179494   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:03.676002   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:06.178560   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:08.676363   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:11.176044   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:13.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:15.676011   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:17.678133   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:20.177064   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:22.676179   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:25.176206   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:27.176706   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:29.177019   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:31.677239   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:33.679396   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:36.176193   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:38.176619   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:40.676129   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:42.677052   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:44.679521   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:47.175636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:49.176114   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:51.676482   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:54.176228   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:56.675340   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:58.676581   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:01.175469   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:03.675918   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:05.677443   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:08.175700   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:10.175971   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:12.176364   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:14.675544   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:16.677069   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:19.178329   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:21.677217   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:22.669233   62197 pod_ready.go:81] duration metric: took 4m0.000357215s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	E0421 20:04:22.669279   62197 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0421 20:04:22.669298   62197 pod_ready.go:38] duration metric: took 4m12.040390946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:04:22.669328   62197 kubeadm.go:591] duration metric: took 4m20.067715018s to restartPrimaryControlPlane
	W0421 20:04:22.669388   62197 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0421 20:04:22.669420   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
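Note: the long wait above is a poll of each system-critical pod's Ready condition; metrics-server-569cc877fc-959cd never reports Ready, the 4m0s budget expires, and minikube falls back to a full kubeadm reset. A hedged client-go sketch of that readiness poll is below; the namespace, pod name, and timeout come from the log, while the kubeconfig path and the loop itself are illustrative, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod's Ready condition until it is True or the
// timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(cs, "kube-system", "metrics-server-569cc877fc-959cd", 4*time.Minute))
}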
	
	
	==> CRI-O <==
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.915470156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729864915445728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fe3a471-dbb6-4e52-a28f-6365500f3fb2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.916255035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c127afa-96d8-417a-8078-51c3b7dfbc1c name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.916339147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c127afa-96d8-417a-8078-51c3b7dfbc1c name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.916515363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c127afa-96d8-417a-8078-51c3b7dfbc1c name=/runtime.v1.RuntimeService/ListContainers
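Note: the CRI-O entries above are debug traces of CRI gRPC requests (Version, ImageFsInfo, ListContainers) arriving on crio.sock, most likely from the kubelet's periodic stats and container listing. A hedged Go sketch of a client issuing the same three calls over the CRI v1 API is below; the socket path matches the reset command earlier in the log, and error handling is deliberately minimal.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// CRI-O listens on a local unix socket; no TLS is involved.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	if v, err := rt.Version(ctx, &runtimeapi.VersionRequest{}); err == nil {
		fmt.Printf("runtime: %s %s\n", v.RuntimeName, v.RuntimeVersion)
	}
	if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("image fs %s used %d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}
	}
	if list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{}); err == nil {
		for _, c := range list.Containers {
			fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
}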
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.961367392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ad55334-ed8c-445d-b4c0-e4f451df19ec name=/runtime.v1.RuntimeService/Version
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.961474001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ad55334-ed8c-445d-b4c0-e4f451df19ec name=/runtime.v1.RuntimeService/Version
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.962863983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e94f582-09f3-4745-ac78-68372bf463c9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.963521807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729864963495727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e94f582-09f3-4745-ac78-68372bf463c9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.964001198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21e62012-83ff-4521-ab6f-9cbac1705f2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.964121985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21e62012-83ff-4521-ab6f-9cbac1705f2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:24 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:24.964313925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21e62012-83ff-4521-ab6f-9cbac1705f2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.020280220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2887427-4640-4787-94ea-81aa3e750bef name=/runtime.v1.RuntimeService/Version
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.020389995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2887427-4640-4787-94ea-81aa3e750bef name=/runtime.v1.RuntimeService/Version
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.022498322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd7072c3-e727-49b9-96d7-9cda788b19c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.026451675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729865026428101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd7072c3-e727-49b9-96d7-9cda788b19c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.027160058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba61119b-88fa-4b3d-8d11-8d8f3b32ddf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.027217716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba61119b-88fa-4b3d-8d11-8d8f3b32ddf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.027451365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba61119b-88fa-4b3d-8d11-8d8f3b32ddf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.068673677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac49cf89-6556-470d-8872-62c050779894 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.068772159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac49cf89-6556-470d-8872-62c050779894 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.070280956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5812aa14-ef1b-47cb-b5ff-825fc94338ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.070762840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729865070736377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5812aa14-ef1b-47cb-b5ff-825fc94338ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.072326982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=367c7df7-0b96-498c-9380-c83df6fae34a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.072483928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=367c7df7-0b96-498c-9380-c83df6fae34a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:04:25 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:04:25.072656089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=367c7df7-0b96-498c-9380-c83df6fae34a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34c9445657c0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f564ef62c5d36       storage-provisioner
	bf807fae6eb29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3fe17f468b22e       coredns-7db6d8ff4d-lbtcm
	bd3a5c5cb97eb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   e490374db989d       coredns-7db6d8ff4d-xmhm6
	1b52f85f70be5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   4bd10da45563f       kube-proxy-wmv4v
	9a048c9824374       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   55ef630000fd7       kube-scheduler-default-k8s-diff-port-167454
	7242f34bc2713       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   457d67ee31ff2       etcd-default-k8s-diff-port-167454
	ae1315d3ba927       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   1d51cf2e60f26       kube-controller-manager-default-k8s-diff-port-167454
	b19255e9ba536       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   1806e64fe49f1       kube-apiserver-default-k8s-diff-port-167454
	
	
	==> coredns [bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-167454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-167454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=default-k8s-diff-port-167454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-167454
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:04:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:00:33 +0000   Sun, 21 Apr 2024 19:55:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:00:33 +0000   Sun, 21 Apr 2024 19:55:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:00:33 +0000   Sun, 21 Apr 2024 19:55:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:00:33 +0000   Sun, 21 Apr 2024 19:55:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.23
	  Hostname:    default-k8s-diff-port-167454
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 967637a2b8bd47528fa6b40636da4a88
	  System UUID:                967637a2-b8bd-4752-8fa6-b40636da4a88
	  Boot ID:                    c12dc575-9a3c-4272-a89d-76f3bb51232a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lbtcm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-xmhm6                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-default-k8s-diff-port-167454                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-167454             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-167454    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-wmv4v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-167454             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-55czz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node default-k8s-diff-port-167454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node default-k8s-diff-port-167454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node default-k8s-diff-port-167454 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node default-k8s-diff-port-167454 event: Registered Node default-k8s-diff-port-167454 in Controller
	
	
	==> dmesg <==
	[  +0.043455] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.957896] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.650763] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.780004] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr21 19:50] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.063734] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068233] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.191799] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.166148] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.330131] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +5.314561] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.069862] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.551101] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +5.626188] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.334779] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.613283] kauditd_printk_skb: 27 callbacks suppressed
	[Apr21 19:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.851109] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[Apr21 19:55] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.075148] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[ +13.599176] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.354561] systemd-fstab-generator[4215]: Ignoring "noauto" option for root device
	[Apr21 19:56] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110] <==
	{"level":"info","ts":"2024-04-21T19:55:01.129157Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T19:55:01.124241Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.23:2380"}
	{"level":"info","ts":"2024-04-21T19:55:01.129433Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.23:2380"}
	{"level":"info","ts":"2024-04-21T19:55:01.125157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad switched to configuration voters=(15337762278866062253)"}
	{"level":"info","ts":"2024-04-21T19:55:01.129907Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9a2bb6132dcffac6","local-member-id":"d4daad8799328bad","added-peer-id":"d4daad8799328bad","added-peer-peer-urls":["https://192.168.61.23:2380"]}
	{"level":"info","ts":"2024-04-21T19:55:01.12626Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T19:55:01.871892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-21T19:55:01.872037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-21T19:55:01.872143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad received MsgPreVoteResp from d4daad8799328bad at term 1"}
	{"level":"info","ts":"2024-04-21T19:55:01.872176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad became candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.8722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad received MsgVoteResp from d4daad8799328bad at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.872227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad became leader at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.872258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4daad8799328bad elected leader d4daad8799328bad at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.877256Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.880135Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4daad8799328bad","local-member-attributes":"{Name:default-k8s-diff-port-167454 ClientURLs:[https://192.168.61.23:2379]}","request-path":"/0/members/d4daad8799328bad/attributes","cluster-id":"9a2bb6132dcffac6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:55:01.880288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:55:01.886689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T19:55:01.889561Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9a2bb6132dcffac6","local-member-id":"d4daad8799328bad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.922821Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.889584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:55:01.896114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:55:01.944563Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:55:01.94474Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.951696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.23:2379"}
	{"level":"info","ts":"2024-04-21T20:00:03.299858Z","caller":"traceutil/trace.go:171","msg":"trace[132640198] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"220.761303ms","start":"2024-04-21T20:00:03.079046Z","end":"2024-04-21T20:00:03.299807Z","steps":["trace[132640198] 'process raft request'  (duration: 220.505565ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:04:25 up 14 min,  0 users,  load average: 0.23, 0.24, 0.18
	Linux default-k8s-diff-port-167454 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18] <==
	I0421 19:58:22.590214       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:00:03.422201       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:00:03.422362       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0421 20:00:04.422846       1 handler_proxy.go:93] no RequestInfo found in the context
	W0421 20:00:04.422995       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:00:04.423115       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:00:04.423130       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0421 20:00:04.423222       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:00:04.424543       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:01:04.423967       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:01:04.424160       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:01:04.424172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:01:04.425436       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:01:04.425547       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:01:04.425556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:03:04.425204       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:03:04.425551       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:03:04.425580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:03:04.425677       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:03:04.425792       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:03:04.426987       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5] <==
	I0421 19:58:52.462312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="125.921µs"
	E0421 19:59:18.926414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 19:59:19.384949       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 19:59:48.933366       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 19:59:49.394018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:00:18.939481       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:00:19.405921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:00:48.944849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:00:49.416011       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:01:17.468136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="520.939µs"
	E0421 20:01:18.952557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:01:19.426152       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:01:28.458182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="200.682µs"
	E0421 20:01:48.958669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:01:49.434747       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:02:18.966014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:02:19.444528       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:02:48.971695       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:02:49.452994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:03:18.978328       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:03:19.462455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:03:48.984005       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:03:49.471491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:04:18.990246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:04:19.481996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928] <==
	I0421 19:55:20.107566       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:55:20.138623       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.23"]
	I0421 19:55:20.341122       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:55:20.341174       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:55:20.341192       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:55:20.352265       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:55:20.352455       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:55:20.352471       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:55:20.354020       1 config.go:192] "Starting service config controller"
	I0421 19:55:20.354132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:55:20.354222       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:55:20.354229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:55:20.354549       1 config.go:319] "Starting node config controller"
	I0421 19:55:20.354555       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:55:20.455028       1 shared_informer.go:320] Caches are synced for node config
	I0421 19:55:20.455159       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:55:20.455197       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37] <==
	W0421 19:55:04.488334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 19:55:04.488590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 19:55:04.510362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:55:04.510428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:55:04.510484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:55:04.510526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:55:04.561508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:55:04.561613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 19:55:04.695751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 19:55:04.695810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 19:55:04.705347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:55:04.705401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:55:04.784284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:55:04.784391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:55:04.818722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 19:55:04.818786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 19:55:04.831349       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:55:04.832148       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:55:04.841590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 19:55:04.841726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 19:55:04.851870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:55:04.852018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:55:04.863378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 19:55:04.863529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0421 19:55:07.131694       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:02:06 default-k8s-diff-port-167454 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:02:06 default-k8s-diff-port-167454 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:02:06 default-k8s-diff-port-167454 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:02:06 default-k8s-diff-port-167454 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:02:07 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:02:07.441453    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:02:21 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:02:21.443266    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:02:35 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:02:35.441027    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:02:47 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:02:47.440958    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:03:02 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:03:02.443003    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:03:06 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:03:06.489981    3952 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:03:06 default-k8s-diff-port-167454 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:03:06 default-k8s-diff-port-167454 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:03:06 default-k8s-diff-port-167454 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:03:06 default-k8s-diff-port-167454 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:03:13 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:03:13.442022    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:03:24 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:03:24.441469    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:03:36 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:03:36.441948    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:03:50 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:03:50.443230    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:04:03 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:04:03.441993    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:04:06 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:04:06.489458    3952 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:04:06 default-k8s-diff-port-167454 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:04:06 default-k8s-diff-port-167454 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:04:06 default-k8s-diff-port-167454 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:04:06 default-k8s-diff-port-167454 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:04:17 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:04:17.441679    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	
	
	==> storage-provisioner [34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb] <==
	I0421 19:55:22.174806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 19:55:22.196796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 19:55:22.196864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 19:55:22.213435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff2f4d85-462c-45eb-b00e-b06214698f91", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-167454_57532d66-9eeb-40d6-bd5e-439b95854ee7 became leader
	I0421 19:55:22.215758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 19:55:22.217801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167454_57532d66-9eeb-40d6-bd5e-439b95854ee7!
	I0421 19:55:22.319147       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167454_57532d66-9eeb-40d6-bd5e-439b95854ee7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-55czz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 describe pod metrics-server-569cc877fc-55czz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167454 describe pod metrics-server-569cc877fc-55czz: exit status 1 (73.267784ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-55czz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-167454 describe pod metrics-server-569cc877fc-55czz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.63s)
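The failure above traces back to the metrics-server pod: the kubelet log shows it stuck pulling fake.domain/registry.k8s.io/echoserver:1.4 (ImagePullBackOff), which is why the post-mortem non-running-pod check reports it. As a rough illustration only (not the suite's helpers_test.go code; the kubeconfig path is hypothetical), a client-go query equivalent to the --field-selector=status.phase!=Running check, extended to print each waiting container's reason, could look like this in Go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirror the helper's --field-selector=status.phase!=Running query in kube-system.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// For the metrics-server pod above this would report ImagePullBackOff.
				fmt.Printf("%s/%s: %s: %s\n", p.Name, cs.Name, cs.State.Waiting.Reason, cs.State.Waiting.Message)
			}
		}
	}
}

For the metrics-server pod, the waiting reason printed would match the ImagePullBackOff entries in the kubelet section of the log dump above.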

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
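Each retry of that 9m0s wait is a label-selector pod list against the API server, and every failed attempt is logged as one of the connection-refused WARNING lines that follow. A minimal sketch of such a poll loop with client-go (hypothetical kubeconfig path and retry interval; not the actual test helper) might look like:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		// Same selector the test waits on: k8s-app=kubernetes-dashboard.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// A stopped or restarting API server lands here, producing the
			// "connection refused" warnings seen below.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		if len(pods.Items) > 0 {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}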
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
... [the WARNING line above repeats 97 more times; the API server at 192.168.50.42:8443 kept refusing connections] ...
E0421 19:59:06.204728   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
... [the WARNING line above repeats 5 more times] ...
E0421 19:59:12.257634   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
[the preceding warning was repeated 90 more times; every poll of https://192.168.50.42:8443 returned "connect: connection refused"]
E0421 20:01:09.207892   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
E0421 20:04:06.205312   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
[the identical connection-refused warning repeats 68 more times while the wait keeps polling the stopped apiserver]
E0421 20:06:09.208549   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
[the same warning repeats 18 more times before the poller's rate limiter hits the context deadline]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (247.501067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-867585" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
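For readers unfamiliar with the wait loop that produced the warnings above: the helper repeatedly lists pods matching the k8s-app=kubernetes-dashboard selector (the exact GET shown in each warning) until the 9m0s context expires, and every attempt fails because the apiserver at 192.168.50.42:8443 is down. The following is a minimal client-go sketch of that kind of polling loop, not the actual helpers_test.go code; the kubeconfig path and the 5-second interval are assumptions.

package main

// Minimal sketch of a "wait for a labelled pod" loop like the one that emitted the
// warnings above. Illustration only: not the real helpers_test.go implementation,
// and the kubeconfig path and poll interval below are assumptions.

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		// Equivalent to GET /api/v1/namespaces/<ns>/pods?labelSelector=<selector>;
		// with the apiserver stopped this returns "connection refused" every time.
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded" after 9m0s
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed placeholder path
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	err = waitForPod(ctx, kubernetes.NewForConfigOrDie(cfg),
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	fmt.Println("wait result:", err)
}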
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (246.871645ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
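The two status probes above show why the harness prints "(may be ok)": minikube status exits non-zero whenever any component is down, yet the Go-template output is still meaningful, so {{.APIServer}} reports Stopped while {{.Host}} reports Running for the same profile. A hedged sketch of issuing such a probe from Go and tolerating the exit code follows; the exit-code interpretation is inferred from this report, not from minikube documentation.

package main

// Sketch: run "minikube status" with a Go template and keep its stdout even when
// the command exits non-zero (exit status 2 above accompanies a stopped component).
// The exit-code semantics are inferred from the log, so treat this as illustrative.

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func componentStatus(profile, field string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		// Non-zero exit, but the template output ("Running"/"Stopped") is still usable.
		return strings.TrimSpace(string(out)), nil
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	api, _ := componentStatus("old-k8s-version-867585", "APIServer")
	host, _ := componentStatus("old-k8s-version-867585", "Host")
	fmt.Printf("apiserver=%s host=%s\n", api, host) // here: apiserver=Stopped host=Running
}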
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-867585 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-867585        | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-167454       | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC | 21 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC | 21 Apr 24 20:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:54:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
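A note on reading the entries that follow: they use the klog header format declared above, [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, so the leading I/W/E/F character is the severity and the text before the closing bracket names the source file and line. The small parser below is only a sketch derived from that format string and the sample lines; it is not an official klog utility.

package main

// Sketch: split klog/glog-style lines such as
//   I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ...
// into their fields. The regexp is inferred from the format line above and is an
// assumption, not a canonical parser.

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s-%s time=%s pid=%s at=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}
}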
	I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:54:52.830912   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.830926   62197 out.go:304] Setting ErrFile to fd 2...
	I0421 19:54:52.830932   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.831126   62197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:54:52.831742   62197 out.go:298] Setting JSON to false
	I0421 19:54:52.832674   62197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5791,"bootTime":1713723502,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:54:52.832739   62197 start.go:139] virtualization: kvm guest
	I0421 19:54:52.835455   62197 out.go:177] * [embed-certs-727235] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:54:52.837412   62197 notify.go:220] Checking for updates...
	I0421 19:54:52.837418   62197 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:54:52.839465   62197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:54:52.841250   62197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:54:52.842894   62197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:54:52.844479   62197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:54:52.845967   62197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:54:52.847931   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:54:52.848387   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.848464   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.864769   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0421 19:54:52.865105   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.865623   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.865642   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.865919   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.866109   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.866305   62197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:54:52.866589   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.866622   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.880497   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0421 19:54:52.880874   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.881355   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.881380   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.881691   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.881883   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.916395   62197 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:54:52.917730   62197 start.go:297] selected driver: kvm2
	I0421 19:54:52.917753   62197 start.go:901] validating driver "kvm2" against &{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.917858   62197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:54:52.918512   62197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.918585   62197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:54:52.933446   62197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:54:52.933791   62197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:54:52.933845   62197 cni.go:84] Creating CNI manager for ""
	I0421 19:54:52.933858   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:54:52.933901   62197 start.go:340] cluster config:
	{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.933981   62197 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.936907   62197 out.go:177] * Starting "embed-certs-727235" primary control-plane node in "embed-certs-727235" cluster
	I0421 19:54:52.938596   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:54:52.938626   62197 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:54:52.938633   62197 cache.go:56] Caching tarball of preloaded images
	I0421 19:54:52.938690   62197 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:54:52.938701   62197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:54:52.938791   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:54:52.938960   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:54:52.938995   62197 start.go:364] duration metric: took 19.691µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:54:52.939006   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:54:52.939011   62197 fix.go:54] fixHost starting: 
	I0421 19:54:52.939248   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.939274   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.953191   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0421 19:54:52.953602   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.953994   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.954024   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.954454   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.954602   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.954750   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:54:52.956153   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Running err=<nil>
	W0421 19:54:52.956167   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:54:52.958195   62197 out.go:177] * Updating the running kvm2 "embed-certs-727235" VM ...
	I0421 19:54:52.959459   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:54:52.959476   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.959678   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:54:52.961705   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:51:24 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:54:52.962165   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962245   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:54:52.962392   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962555   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962682   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:54:52.962853   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:54:52.963028   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:54:52.963038   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:54:55.842410   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
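The "no route to host" failure above (and its repeats later in this log) is a plain TCP dial to the VM's SSH port failing at the network layer before SSH ever starts. As a rough illustrative sketch of that kind of reachability probe (not minikube's own dialer; the address and the 3-second timeout are assumptions taken from the log line above), a minimal Go check could look like:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the SSH endpoint that keeps failing in the log above.
	// Address and timeout are illustrative assumptions, not values
	// taken from minikube's own code.
	conn, err := net.DialTimeout("tcp", "192.168.72.9:22", 3*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("port 22 reachable")
}

When the route to the guest is genuinely missing, a probe like this typically returns quickly with the same "connect: no route to host" error recorded in the log, rather than timing out.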
	I0421 19:54:58.070842   57617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.405000958s)
	I0421 19:54:58.070936   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:54:58.089413   57617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:54:58.101786   57617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:54:58.114021   57617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:54:58.114065   57617 kubeadm.go:156] found existing configuration files:
	
	I0421 19:54:58.114126   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0421 19:54:58.124228   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:54:58.124296   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:54:58.135037   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0421 19:54:58.144890   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:54:58.144958   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:54:58.155403   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.165155   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:54:58.165207   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.175703   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0421 19:54:58.185428   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:54:58.185521   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:54:58.195328   57617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:54:58.257787   57617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:54:58.257868   57617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:54:58.432626   57617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:54:58.432766   57617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:54:58.432943   57617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:54:58.677807   57617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:54:58.679655   57617 out.go:204]   - Generating certificates and keys ...
	I0421 19:54:58.679763   57617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:54:58.679856   57617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:54:58.679974   57617 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:54:58.680053   57617 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:54:58.680125   57617 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:54:58.680177   57617 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:54:58.681691   57617 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:54:58.682034   57617 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:54:58.682257   57617 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:54:58.682547   57617 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:54:58.682770   57617 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:54:58.682840   57617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:54:58.938223   57617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:54:58.989244   57617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:54:59.196060   57617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:54:59.378330   57617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:54:59.435654   57617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:54:59.436159   57617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:54:59.440839   57617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:54:58.914303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:59.442694   57617 out.go:204]   - Booting up control plane ...
	I0421 19:54:59.442826   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:54:59.442942   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:54:59.443122   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:54:59.466298   57617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:54:59.469370   57617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:54:59.469656   57617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:54:59.622281   57617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:54:59.622433   57617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:55:00.123513   57617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.401309ms
	I0421 19:55:00.123606   57617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:55:05.627324   57617 kubeadm.go:309] [api-check] The API server is healthy after 5.503528473s
	I0421 19:55:05.644392   57617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:55:05.666212   57617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:55:05.696150   57617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:55:05.696487   57617 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-167454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:55:05.709873   57617 kubeadm.go:309] [bootstrap-token] Using token: ypxtpg.5u6l3v2as04iv2aj
	I0421 19:55:05.711407   57617 out.go:204]   - Configuring RBAC rules ...
	I0421 19:55:05.711556   57617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:55:05.721552   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:55:05.735168   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:55:05.739580   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:55:05.743466   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:55:05.747854   57617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:55:06.034775   57617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:55:06.468585   57617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:55:07.036924   57617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:55:07.036983   57617 kubeadm.go:309] 
	I0421 19:55:07.037040   57617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:55:07.037060   57617 kubeadm.go:309] 
	I0421 19:55:07.037199   57617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:55:07.037218   57617 kubeadm.go:309] 
	I0421 19:55:07.037259   57617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:55:07.037348   57617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:55:07.037419   57617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:55:07.037433   57617 kubeadm.go:309] 
	I0421 19:55:07.037526   57617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:55:07.037540   57617 kubeadm.go:309] 
	I0421 19:55:07.037604   57617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:55:07.037615   57617 kubeadm.go:309] 
	I0421 19:55:07.037681   57617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:55:07.037760   57617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:55:07.037823   57617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:55:07.037828   57617 kubeadm.go:309] 
	I0421 19:55:07.037899   57617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:55:07.037964   57617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:55:07.037970   57617 kubeadm.go:309] 
	I0421 19:55:07.038098   57617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038255   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 19:55:07.038283   57617 kubeadm.go:309] 	--control-plane 
	I0421 19:55:07.038288   57617 kubeadm.go:309] 
	I0421 19:55:07.038400   57617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:55:07.038411   57617 kubeadm.go:309] 
	I0421 19:55:07.038517   57617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038672   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 19:55:07.038956   57617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:55:07.038982   57617 cni.go:84] Creating CNI manager for ""
	I0421 19:55:07.038998   57617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:55:07.040852   57617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:55:04.994338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:07.042257   57617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:55:07.057287   57617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 19:55:07.078228   57617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:55:07.078330   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.078390   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167454 minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=default-k8s-diff-port-167454 minikube.k8s.io/primary=true
	I0421 19:55:07.128726   57617 ops.go:34] apiserver oom_adj: -16
	I0421 19:55:07.277531   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.778563   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.066312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:08.278441   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.778051   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.277768   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.777868   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.278602   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.777607   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.278260   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.777609   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.277684   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.778116   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.146347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:17.218265   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:13.278439   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:13.777901   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.278214   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.777957   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.278369   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.778113   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.277991   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.778322   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.278350   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.778144   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.278465   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.778049   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.278228   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.777615   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.945015   57617 kubeadm.go:1107] duration metric: took 12.866746923s to wait for elevateKubeSystemPrivileges
	W0421 19:55:19.945062   57617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:55:19.945073   57617 kubeadm.go:393] duration metric: took 5m11.113256567s to StartCluster
	I0421 19:55:19.945094   57617 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.945186   57617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:55:19.947618   57617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.947919   57617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.23 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:55:19.949819   57617 out.go:177] * Verifying Kubernetes components...
	I0421 19:55:19.947983   57617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:55:19.948132   57617 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:55:19.951664   57617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:55:19.951671   57617 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951685   57617 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951708   57617 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951718   57617 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-167454"
	I0421 19:55:19.951720   57617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167454"
	W0421 19:55:19.951730   57617 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:55:19.951741   57617 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.951753   57617 addons.go:243] addon metrics-server should already be in state true
	I0421 19:55:19.951766   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.951781   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.952059   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952095   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952147   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952169   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952170   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952378   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.969767   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0421 19:55:19.970291   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.971023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.971053   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.971517   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.971747   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.971966   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0421 19:55:19.972325   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0421 19:55:19.972539   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.972691   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.973050   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973075   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973313   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973336   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973408   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973712   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973986   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974023   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.974287   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974321   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.976061   57617 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.976086   57617 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:55:19.976116   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.976473   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.976513   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.989851   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I0421 19:55:19.990053   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0421 19:55:19.990494   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.990573   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.991023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991039   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991170   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991197   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991380   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991527   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991556   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.991713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.993398   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995704   57617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:55:19.994181   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995594   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0421 19:55:19.997429   57617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:19.997450   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:55:19.997470   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:19.998995   57617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 19:55:19.997642   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.000129   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000728   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.000743   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000638   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.000805   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 19:55:20.000816   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 19:55:20.000826   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.000991   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.001147   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.001328   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.001340   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.001362   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.001763   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.002313   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:20.002335   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:20.003803   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004388   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.004404   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004602   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.004792   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.004958   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.005128   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.018016   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0421 19:55:20.018651   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.019177   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.019196   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.019422   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.019702   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:20.021066   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:20.021324   57617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.021340   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:55:20.021357   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.024124   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024503   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.024524   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024686   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.024848   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.025030   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.025184   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.214689   57617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:55:20.264530   57617 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.281976   57617 node_ready.go:49] node "default-k8s-diff-port-167454" has status "Ready":"True"
	I0421 19:55:20.281999   57617 node_ready.go:38] duration metric: took 17.434628ms for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.282007   57617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:20.297108   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:20.386102   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.408686   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 19:55:20.408706   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 19:55:20.416022   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:20.455756   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 19:55:20.455778   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 19:55:20.603535   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.603559   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 19:55:20.690543   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.842718   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.842753   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843074   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843148   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843163   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.843172   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.843191   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843475   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843511   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843525   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.856272   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.856294   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.856618   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.856636   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.856673   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550249   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13418491s)
	I0421 19:55:21.550297   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550305   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550577   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550654   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:21.550663   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550675   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550684   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550928   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550946   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.853935   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.853970   57617 pod_ready.go:81] duration metric: took 1.556832657s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.853984   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924815   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.924845   57617 pod_ready.go:81] duration metric: took 70.852928ms for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924857   57617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955217   57617 pod_ready.go:92] pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.955246   57617 pod_ready.go:81] duration metric: took 30.380253ms for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955259   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975065   57617 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.975094   57617 pod_ready.go:81] duration metric: took 19.818959ms for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975106   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981884   57617 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.981907   57617 pod_ready.go:81] duration metric: took 6.791796ms for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981919   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.001934   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311352362s)
	I0421 19:55:22.001984   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002000   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002311   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002369   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002330   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.002410   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002434   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002649   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002689   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002705   57617 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-167454"
	I0421 19:55:22.002713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.005036   57617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0421 19:55:22.006362   57617 addons.go:505] duration metric: took 2.058380621s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0421 19:55:22.269772   57617 pod_ready.go:92] pod "kube-proxy-wmv4v" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.269798   57617 pod_ready.go:81] duration metric: took 287.872366ms for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.269808   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668470   57617 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.668494   57617 pod_ready.go:81] duration metric: took 398.679544ms for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668502   57617 pod_ready.go:38] duration metric: took 2.386486578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:22.668516   57617 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:55:22.668560   57617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:55:22.688191   57617 api_server.go:72] duration metric: took 2.740229162s to wait for apiserver process to appear ...
	I0421 19:55:22.688224   57617 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:55:22.688244   57617 api_server.go:253] Checking apiserver healthz at https://192.168.61.23:8444/healthz ...
	I0421 19:55:22.699424   57617 api_server.go:279] https://192.168.61.23:8444/healthz returned 200:
	ok
	I0421 19:55:22.700614   57617 api_server.go:141] control plane version: v1.30.0
	I0421 19:55:22.700636   57617 api_server.go:131] duration metric: took 12.404937ms to wait for apiserver health ...
	I0421 19:55:22.700643   57617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:55:22.873594   57617 system_pods.go:59] 9 kube-system pods found
	I0421 19:55:22.873622   57617 system_pods.go:61] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:22.873631   57617 system_pods.go:61] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:22.873635   57617 system_pods.go:61] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:22.873639   57617 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:22.873643   57617 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:22.873647   57617 system_pods.go:61] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:22.873651   57617 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:22.873657   57617 system_pods.go:61] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:22.873698   57617 system_pods.go:61] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:22.873717   57617 system_pods.go:74] duration metric: took 173.068164ms to wait for pod list to return data ...
	I0421 19:55:22.873731   57617 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:55:23.068026   57617 default_sa.go:45] found service account: "default"
	I0421 19:55:23.068053   57617 default_sa.go:55] duration metric: took 194.313071ms for default service account to be created ...
	I0421 19:55:23.068064   57617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:55:23.272118   57617 system_pods.go:86] 9 kube-system pods found
	I0421 19:55:23.272148   57617 system_pods.go:89] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:23.272156   57617 system_pods.go:89] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:23.272162   57617 system_pods.go:89] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:23.272168   57617 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:23.272173   57617 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:23.272178   57617 system_pods.go:89] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:23.272184   57617 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:23.272194   57617 system_pods.go:89] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:23.272200   57617 system_pods.go:89] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:23.272212   57617 system_pods.go:126] duration metric: took 204.142116ms to wait for k8s-apps to be running ...
	I0421 19:55:23.272231   57617 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:55:23.272283   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:23.288800   57617 system_svc.go:56] duration metric: took 16.572799ms WaitForService to wait for kubelet
	I0421 19:55:23.288829   57617 kubeadm.go:576] duration metric: took 3.340874079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:55:23.288851   57617 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:55:23.469503   57617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:55:23.469541   57617 node_conditions.go:123] node cpu capacity is 2
	I0421 19:55:23.469554   57617 node_conditions.go:105] duration metric: took 180.696423ms to run NodePressure ...
	I0421 19:55:23.469567   57617 start.go:240] waiting for startup goroutines ...
	I0421 19:55:23.469576   57617 start.go:245] waiting for cluster config update ...
	I0421 19:55:23.469590   57617 start.go:254] writing updated cluster config ...
	I0421 19:55:23.469941   57617 ssh_runner.go:195] Run: rm -f paused
	I0421 19:55:23.521989   57617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:55:23.524030   57617 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-167454" cluster and "default" namespace by default
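A few lines earlier this run waits for https://192.168.61.23:8444/healthz to return 200 before treating the apiserver as healthy. Purely as an illustrative sketch of that style of probe (not minikube's actual client code; the endpoint, timeout, and the decision to skip certificate verification are assumptions made here for brevity), the check amounts to:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver /healthz endpoint and
// reports whether it answered 200 OK. Certificate verification is skipped
// only because this sketch has no access to the cluster CA; a real client
// would load that CA instead.
func checkHealthz(addr string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical endpoint matching the port used by this profile (8444).
	if err := checkHealthz("192.168.61.23:8444"); err != nil {
		fmt.Println("apiserver not ready:", err)
		return
	}
	fmt.Println("ok")
}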
	I0421 19:55:23.298271   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:29.590689   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:55:29.590767   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:55:29.592377   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:29.592430   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:29.592527   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:29.592662   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:29.592794   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:29.592892   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:29.595022   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:29.595115   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:29.595190   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:29.595263   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:29.595311   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:29.595375   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:29.595433   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:29.595520   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:29.595598   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:29.595680   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:29.595775   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:29.595824   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:29.595875   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:29.595919   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:29.595982   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:29.596047   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:29.596091   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:29.596174   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:29.596256   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:29.596301   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:29.596367   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.598820   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:29.598926   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:29.598993   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:29.599054   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:29.599162   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:29.599331   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:29.599418   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:55:29.599516   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599705   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.599772   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599936   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600041   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600191   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600244   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600389   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600481   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600654   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600669   58211 kubeadm.go:309] 
	I0421 19:55:29.600702   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:55:29.600737   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:55:29.600743   58211 kubeadm.go:309] 
	I0421 19:55:29.600777   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:55:29.600810   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:55:29.600901   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:55:29.600908   58211 kubeadm.go:309] 
	I0421 19:55:29.601009   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:55:29.601057   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:55:29.601109   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:55:29.601118   58211 kubeadm.go:309] 
	I0421 19:55:29.601224   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:55:29.601323   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:55:29.601333   58211 kubeadm.go:309] 
	I0421 19:55:29.601485   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:55:29.601579   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:55:29.601646   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:55:29.601751   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:55:29.601835   58211 kubeadm.go:309] 
	W0421 19:55:29.601862   58211 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0421 19:55:29.601908   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:55:30.075850   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:30.092432   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:55:30.103405   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:55:30.103429   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:55:30.103473   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:55:30.114018   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:55:30.114073   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:55:30.124410   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:55:30.134021   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:55:30.134076   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:55:30.143946   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.153926   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:55:30.153973   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.164013   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:55:30.173459   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:55:30.173512   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
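	(The block above shows minikube's stale-config cleanup before it retries kubeadm init: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not reference it. A minimal shell sketch of that check, using only the endpoint and file names visible in this log; run on the node, e.g. via minikube ssh:

	    # Remove kubeconfig files that do not point at the expected control-plane endpoint,
	    # so the next "kubeadm init" regenerates them. Endpoint and file list come from the log above.
	    endpoint="https://control-plane.minikube.internal:8443"
	    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
	        sudo rm -f "/etc/kubernetes/$conf"
	      fi
	    done
	)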
	I0421 19:55:30.184067   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:55:30.259108   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:30.259195   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:30.422144   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:30.422317   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:30.422497   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:30.619194   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:30.621135   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:30.621258   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:30.621314   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:30.621437   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:30.621530   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:30.621617   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:30.621956   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:30.622478   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:30.623068   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:30.623509   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:30.624072   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:30.624110   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:30.624183   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:30.871049   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:30.931466   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:31.088680   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:31.275358   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:31.305344   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:31.307220   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:31.307289   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:31.484365   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.378329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:32.450259   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:31.486164   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:31.486312   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:31.492868   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:31.494787   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:31.496104   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:31.500190   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:38.530370   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:41.602365   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:47.682316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:50.754312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:56.834318   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:59.906313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:05.986294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:09.058300   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:11.503250   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:56:11.503361   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:11.503618   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:15.138313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:16.504469   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:16.504743   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:18.210376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:24.290344   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:27.366276   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:26.505496   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:26.505769   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:33.442294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:36.514319   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:42.594275   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:45.670298   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:46.505851   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:46.506140   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:51.746306   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:54.818338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:00.898357   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:03.974324   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:10.050360   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:13.122376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:19.202341   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:22.274304   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:26.505043   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:57:26.505356   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505385   58211 kubeadm.go:309] 
	I0421 19:57:26.505436   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:57:26.505495   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:57:26.505505   58211 kubeadm.go:309] 
	I0421 19:57:26.505553   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:57:26.505596   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:57:26.505720   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:57:26.505730   58211 kubeadm.go:309] 
	I0421 19:57:26.505839   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:57:26.505883   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:57:26.505912   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:57:26.505919   58211 kubeadm.go:309] 
	I0421 19:57:26.506020   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:57:26.506152   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:57:26.506181   58211 kubeadm.go:309] 
	I0421 19:57:26.506346   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:57:26.506480   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:57:26.506581   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:57:26.506702   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:57:26.506721   58211 kubeadm.go:309] 
	I0421 19:57:26.507115   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:57:26.507237   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:57:26.507330   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:57:26.507409   58211 kubeadm.go:393] duration metric: took 8m0.981544676s to StartCluster
	I0421 19:57:26.507461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:57:26.507523   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:57:26.556647   58211 cri.go:89] found id: ""
	I0421 19:57:26.556676   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.556687   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:57:26.556695   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:57:26.556748   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:57:26.595025   58211 cri.go:89] found id: ""
	I0421 19:57:26.595055   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.595064   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:57:26.595069   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:57:26.595143   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:57:26.634084   58211 cri.go:89] found id: ""
	I0421 19:57:26.634115   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.634126   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:57:26.634134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:57:26.634201   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:57:26.672409   58211 cri.go:89] found id: ""
	I0421 19:57:26.672439   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.672450   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:57:26.672458   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:57:26.672515   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:57:26.720123   58211 cri.go:89] found id: ""
	I0421 19:57:26.720151   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.720159   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:57:26.720165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:57:26.720219   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:57:26.756889   58211 cri.go:89] found id: ""
	I0421 19:57:26.756918   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.756929   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:57:26.756936   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:57:26.757044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:57:26.802160   58211 cri.go:89] found id: ""
	I0421 19:57:26.802188   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.802197   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:57:26.802204   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:57:26.802264   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:57:26.841543   58211 cri.go:89] found id: ""
	I0421 19:57:26.841567   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.841574   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:57:26.841583   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:57:26.841598   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:57:26.894547   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:57:26.894575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:57:26.909052   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:57:26.909077   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:57:27.002127   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:57:27.002150   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:57:27.002166   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:57:27.120460   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:57:27.120494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0421 19:57:27.170858   58211 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:57:27.170914   58211 out.go:239] * 
	W0421 19:57:27.170969   58211 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.170990   58211 out.go:239] * 
	W0421 19:57:27.171868   58211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:57:27.174893   58211 out.go:177] 
	W0421 19:57:27.176215   58211 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.176288   58211 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:57:27.176319   58211 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:57:27.177779   58211 out.go:177] 
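	(The start above exits with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch that follows the suggestions printed in the failure output; the profile name <profile> is a placeholder and any flags beyond --extra-config=kubelet.cgroup-driver=systemd are not taken from this log:

	    # Inspect the kubelet and any crashed control-plane containers (commands from the advice above).
	    minikube -p <profile> ssh "sudo systemctl status kubelet"
	    minikube -p <profile> ssh "sudo journalctl -xeu kubelet"
	    minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	    # Retry start with the cgroup-driver hint from the suggestion, then collect logs if it still fails.
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	    minikube -p <profile> logs --file=logs.txt
	)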
	I0421 19:57:28.354287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:31.426307   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:37.506302   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:40.578329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:46.658286   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:49.730290   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:55.810303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:58.882287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:04.962316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:08.038328   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:14.114282   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:17.186379   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:23.270347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:26.338313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:32.418266   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:35.494377   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:41.570277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:44.642263   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:50.722316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:53.794367   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:59.874261   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:02.946333   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:09.026296   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:12.098331   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:18.178280   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:21.250268   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:27.330277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:30.331351   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:59:30.331383   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331744   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:30.331770   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331983   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:30.333880   62197 machine.go:97] duration metric: took 4m37.374404361s to provisionDockerMachine
	I0421 19:59:30.333921   62197 fix.go:56] duration metric: took 4m37.394910099s for fixHost
	I0421 19:59:30.333928   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 4m37.394926037s
	W0421 19:59:30.333945   62197 start.go:713] error starting host: provision: host is not running
	W0421 19:59:30.334039   62197 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0421 19:59:30.334070   62197 start.go:728] Will try again in 5 seconds ...
	I0421 19:59:35.335761   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:59:35.335860   62197 start.go:364] duration metric: took 61.013µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:59:35.335882   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:59:35.335890   62197 fix.go:54] fixHost starting: 
	I0421 19:59:35.336171   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:59:35.336191   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:59:35.351703   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0421 19:59:35.352186   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:59:35.352723   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:59:35.352752   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:59:35.353060   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:59:35.353252   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:35.353458   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:59:35.355260   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Stopped err=<nil>
	I0421 19:59:35.355290   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	W0421 19:59:35.355474   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:59:35.357145   62197 out.go:177] * Restarting existing kvm2 VM for "embed-certs-727235" ...
	I0421 19:59:35.358345   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Start
	I0421 19:59:35.358510   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring networks are active...
	I0421 19:59:35.359250   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network default is active
	I0421 19:59:35.359533   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network mk-embed-certs-727235 is active
	I0421 19:59:35.359951   62197 main.go:141] libmachine: (embed-certs-727235) Getting domain xml...
	I0421 19:59:35.360663   62197 main.go:141] libmachine: (embed-certs-727235) Creating domain...
	I0421 19:59:36.615174   62197 main.go:141] libmachine: (embed-certs-727235) Waiting to get IP...
	I0421 19:59:36.615997   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.616369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.616421   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.616351   63337 retry.go:31] will retry after 283.711872ms: waiting for machine to come up
	I0421 19:59:36.902032   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.902618   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.902655   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.902566   63337 retry.go:31] will retry after 336.383022ms: waiting for machine to come up
	I0421 19:59:37.240117   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.240613   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.240637   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.240565   63337 retry.go:31] will retry after 468.409378ms: waiting for machine to come up
	I0421 19:59:37.711065   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.711526   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.711556   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.711481   63337 retry.go:31] will retry after 457.618649ms: waiting for machine to come up
	I0421 19:59:38.170991   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.171513   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.171542   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.171450   63337 retry.go:31] will retry after 756.497464ms: waiting for machine to come up
	I0421 19:59:38.929950   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.930460   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.930495   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.930406   63337 retry.go:31] will retry after 667.654845ms: waiting for machine to come up
	I0421 19:59:39.599112   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:39.599566   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:39.599595   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:39.599514   63337 retry.go:31] will retry after 862.586366ms: waiting for machine to come up
	I0421 19:59:40.463709   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:40.464277   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:40.464311   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:40.464216   63337 retry.go:31] will retry after 1.446407672s: waiting for machine to come up
	I0421 19:59:41.912470   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:41.912935   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:41.912967   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:41.912893   63337 retry.go:31] will retry after 1.78143514s: waiting for machine to come up
	I0421 19:59:43.695369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:43.695781   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:43.695818   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:43.695761   63337 retry.go:31] will retry after 1.850669352s: waiting for machine to come up
	I0421 19:59:45.547626   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:45.548119   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:45.548147   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:45.548063   63337 retry.go:31] will retry after 2.399567648s: waiting for machine to come up
	I0421 19:59:47.949884   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:47.950410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:47.950435   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:47.950371   63337 retry.go:31] will retry after 2.461886164s: waiting for machine to come up
	I0421 19:59:50.413594   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:50.414039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:50.414075   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:50.413995   63337 retry.go:31] will retry after 3.706995804s: waiting for machine to come up
	I0421 19:59:54.123715   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124155   62197 main.go:141] libmachine: (embed-certs-727235) Found IP for machine: 192.168.72.9
	I0421 19:59:54.124185   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has current primary IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124194   62197 main.go:141] libmachine: (embed-certs-727235) Reserving static IP address...
	I0421 19:59:54.124657   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.124687   62197 main.go:141] libmachine: (embed-certs-727235) Reserved static IP address: 192.168.72.9
	I0421 19:59:54.124708   62197 main.go:141] libmachine: (embed-certs-727235) DBG | skip adding static IP to network mk-embed-certs-727235 - found existing host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"}
	I0421 19:59:54.124723   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Getting to WaitForSSH function...
	I0421 19:59:54.124737   62197 main.go:141] libmachine: (embed-certs-727235) Waiting for SSH to be available...
	I0421 19:59:54.126889   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127295   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.127327   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH client type: external
	I0421 19:59:54.127437   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa (-rw-------)
	I0421 19:59:54.127483   62197 main.go:141] libmachine: (embed-certs-727235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:59:54.127502   62197 main.go:141] libmachine: (embed-certs-727235) DBG | About to run SSH command:
	I0421 19:59:54.127521   62197 main.go:141] libmachine: (embed-certs-727235) DBG | exit 0
	I0421 19:59:54.254733   62197 main.go:141] libmachine: (embed-certs-727235) DBG | SSH cmd err, output: <nil>: 
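
Editor's note: the block above shows libmachine polling for the guest's DHCP lease with a growing delay between attempts (the retry.go "will retry after ..." lines), then probing SSH with a bare `exit 0`. A minimal Go sketch of that wait-with-backoff pattern follows; it is illustrative only, not minikube's retry.go implementation, and the delay limits and probe are invented for the example.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls probe() until it succeeds or the deadline passes,
    // roughly doubling the delay (plus jitter) between attempts, like the
    // "will retry after ..." lines in the log above.
    func waitFor(probe func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for {
    		if err := probe(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for machine")
    		}
    		// add up to 50% jitter so parallel waiters do not sync up
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	err := waitFor(func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("no IP yet") // stand-in for "unable to find current IP"
    		}
    		return nil
    	}, 30*time.Second)
    	fmt.Println("done:", err)
    }
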
	I0421 19:59:54.255110   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetConfigRaw
	I0421 19:59:54.255772   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.258448   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.258834   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.258858   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.259128   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:59:54.259326   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:59:54.259348   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:54.259572   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.262235   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262666   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.262695   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262773   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.262946   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263307   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.263484   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.263693   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.263712   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:59:54.379098   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:59:54.379135   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379445   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:54.379477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379680   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.382614   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383064   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.383095   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383211   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.383422   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383585   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383748   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.383896   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.384121   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.384147   62197 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-727235 && echo "embed-certs-727235" | sudo tee /etc/hostname
	I0421 19:59:54.511915   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-727235
	
	I0421 19:59:54.511944   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.515093   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515475   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.515508   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515663   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.515865   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516024   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.516275   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.516436   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.516452   62197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-727235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-727235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-727235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:59:54.638386   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
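
Editor's note: the SSH command above is the stock hostname fix-up: if no /etc/hosts line already ends in the machine name, rewrite an existing 127.0.1.1 entry or append one. The same idea expressed as a small self-contained Go helper, shown here for reference only (it operates on the file contents in memory and prints the result; the hostname is the one from this run):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostname mirrors the grep/sed/tee sequence in the SSH command above:
    // leave the file alone if the name is present, otherwise rewrite or append
    // the 127.0.1.1 line.
    func ensureHostname(contents, hostname string) string {
    	lines := strings.Split(contents, "\n")
    	for _, line := range lines {
    		fields := strings.Fields(line)
    		if len(fields) > 0 && fields[len(fields)-1] == hostname {
    			return contents // hostname already mapped, nothing to do
    		}
    	}
    	for i, line := range lines {
    		if strings.HasPrefix(line, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			return strings.Join(lines, "\n")
    		}
    	}
    	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Print(ensureHostname(string(data), "embed-certs-727235"))
    }
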
	I0421 19:59:54.638426   62197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:59:54.638450   62197 buildroot.go:174] setting up certificates
	I0421 19:59:54.638460   62197 provision.go:84] configureAuth start
	I0421 19:59:54.638468   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.638764   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.641718   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.642084   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642297   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.644790   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645154   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.645182   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645300   62197 provision.go:143] copyHostCerts
	I0421 19:59:54.645353   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:59:54.645363   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:59:54.645423   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:59:54.645506   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:59:54.645514   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:59:54.645535   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:59:54.645587   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:59:54.645594   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:59:54.645613   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:59:54.645658   62197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-727235 san=[127.0.0.1 192.168.72.9 embed-certs-727235 localhost minikube]
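
Editor's note: the "generating server cert" step above issues a CA-signed server certificate whose SANs are exactly the san=[...] list in the log line. A compact sketch of doing the same with Go's crypto/x509 follows; it is not minikube's provision.go code, and key sizes, lifetimes and error handling are simplified for illustration.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA key and self-signed CA certificate (error handling elided for brevity).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-727235"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-727235", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.9")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
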
	I0421 19:59:54.847892   62197 provision.go:177] copyRemoteCerts
	I0421 19:59:54.847950   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:59:54.847974   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.850561   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.850885   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.850916   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.851070   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.851261   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.851408   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.851542   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:54.939705   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 19:59:54.969564   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:59:54.996643   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:59:55.023261   62197 provision.go:87] duration metric: took 384.790427ms to configureAuth
	I0421 19:59:55.023285   62197 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:59:55.023469   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:59:55.023553   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.026429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026817   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.026851   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026984   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.027176   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027309   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.027605   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.027807   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.027831   62197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:59:55.329921   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:59:55.329950   62197 machine.go:97] duration metric: took 1.070609599s to provisionDockerMachine
	I0421 19:59:55.329967   62197 start.go:293] postStartSetup for "embed-certs-727235" (driver="kvm2")
	I0421 19:59:55.329986   62197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:59:55.330007   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.330422   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:59:55.330455   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.333062   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.333463   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333642   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.333820   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.333973   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.334132   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.422655   62197 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:59:55.428020   62197 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:59:55.428049   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:59:55.428128   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:59:55.428222   62197 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:59:55.428344   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:59:55.439964   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:59:55.469927   62197 start.go:296] duration metric: took 139.939886ms for postStartSetup
	I0421 19:59:55.469977   62197 fix.go:56] duration metric: took 20.134086048s for fixHost
	I0421 19:59:55.469997   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.472590   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.472954   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.472986   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.473194   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.473438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473616   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473813   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.473993   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.474209   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.474220   62197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:59:55.583326   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713729595.559945159
	
	I0421 19:59:55.583347   62197 fix.go:216] guest clock: 1713729595.559945159
	I0421 19:59:55.583358   62197 fix.go:229] Guest: 2024-04-21 19:59:55.559945159 +0000 UTC Remote: 2024-04-21 19:59:55.469982444 +0000 UTC m=+302.687162567 (delta=89.962715ms)
	I0421 19:59:55.583413   62197 fix.go:200] guest clock delta is within tolerance: 89.962715ms
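
Editor's note: fix.go compares the guest clock (read via `date +%s.%N` over SSH) against the host-side timestamp and only resyncs when the delta exceeds a tolerance; here the delta is about 89.96ms and passes. A tiny Go sketch of that comparison, using the two timestamps from the log; the 2s threshold is an assumption for illustration, not minikube's configured value:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestTime parses the "seconds.nanoseconds" string produced by `date +%s.%N`.
    func guestTime(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := guestTime("1713729595.559945159") // guest clock from the log above
    	if err != nil {
    		panic(err)
    	}
    	remote := time.Date(2024, 4, 21, 19, 59, 55, 469982444, time.UTC) // host-side timestamp
    	delta := guest.Sub(remote)
    	const tolerance = 2 * time.Second // illustrative threshold
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) <= float64(tolerance))
    }
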
	I0421 19:59:55.583420   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 20.24754889s
	I0421 19:59:55.583466   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.583763   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:55.586342   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586700   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.586726   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586824   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587277   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587559   62197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:59:55.587601   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.587683   62197 ssh_runner.go:195] Run: cat /version.json
	I0421 19:59:55.587721   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.590094   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590379   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590476   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590505   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590641   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590721   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590747   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590817   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.590888   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590972   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591052   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.591128   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.591172   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591276   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.676275   62197 ssh_runner.go:195] Run: systemctl --version
	I0421 19:59:55.700845   62197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:59:55.849591   62197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:59:55.856384   62197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:59:55.856444   62197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:59:55.875575   62197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:59:55.875602   62197 start.go:494] detecting cgroup driver to use...
	I0421 19:59:55.875686   62197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:59:55.892497   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:59:55.907596   62197 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:59:55.907660   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:59:55.922805   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:59:55.938117   62197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:59:56.064198   62197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:59:56.239132   62197 docker.go:233] disabling docker service ...
	I0421 19:59:56.239210   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:59:56.256188   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:59:56.271951   62197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:59:56.409651   62197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:59:56.545020   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:59:56.560474   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:59:56.581091   62197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 19:59:56.581170   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.591783   62197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:59:56.591853   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.602656   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.613491   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.624452   62197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:59:56.635277   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.646299   62197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.665973   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
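
Editor's note: the run of `sed -i` commands above flips the pause image, the cgroup manager and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. The same kind of line rewrite in Go, shown on an in-memory copy of the file; the config fragment below is invented for the example and the regexes simply mirror two of the logged sed expressions:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Example 02-crio.conf fragment (made up for illustration).
    const conf = `[crio.runtime]
    cgroup_manager = "systemd"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.8"
    `

    func main() {
    	out := conf
    	// equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
    	// equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
    	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.9"`)
    	fmt.Print(out)
    }
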
	I0421 19:59:56.677014   62197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:59:56.687289   62197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:59:56.687340   62197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:59:56.702507   62197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:59:56.723008   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:59:56.879595   62197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:59:57.034078   62197 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:59:57.034150   62197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:59:57.039565   62197 start.go:562] Will wait 60s for crictl version
	I0421 19:59:57.039621   62197 ssh_runner.go:195] Run: which crictl
	I0421 19:59:57.044006   62197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:59:57.089252   62197 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:59:57.089340   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.121283   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.160334   62197 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 19:59:57.161976   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:57.164827   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165288   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:57.165321   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165536   62197 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0421 19:59:57.170481   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:59:57.185488   62197 kubeadm.go:877] updating cluster {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-
727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:59:57.185682   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:59:57.185736   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:59:57.237246   62197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 19:59:57.237303   62197 ssh_runner.go:195] Run: which lz4
	I0421 19:59:57.241760   62197 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 19:59:57.246777   62197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:59:57.246817   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 19:59:58.900652   62197 crio.go:462] duration metric: took 1.658935699s to copy over tarball
	I0421 19:59:58.900742   62197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:00:01.517236   62197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.616462501s)
	I0421 20:00:01.517268   62197 crio.go:469] duration metric: took 2.616589126s to extract the tarball
	I0421 20:00:01.517279   62197 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:00:01.560109   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:00:01.610448   62197 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:00:01.610476   62197 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:00:01.610484   62197 kubeadm.go:928] updating node { 192.168.72.9 8443 v1.30.0 crio true true} ...
	I0421 20:00:01.610605   62197 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-727235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
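
Editor's note: kubeadm.go:940 logs the systemd drop-in it is about to write for the kubelet (the [Unit]/[Service]/[Install] block above), filled in with the hostname, node IP and versioned binary path. Rendering such a drop-in with Go's text/template looks roughly like this; the template text and struct are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	params := struct {
    		KubeletPath, Hostname, NodeIP string
    	}{
    		KubeletPath: "/var/lib/minikube/binaries/v1.30.0/kubelet",
    		Hostname:    "embed-certs-727235",
    		NodeIP:      "192.168.72.9",
    	}
    	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
    	if err := tmpl.Execute(os.Stdout, params); err != nil {
    		panic(err)
    	}
    }
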
	I0421 20:00:01.610711   62197 ssh_runner.go:195] Run: crio config
	I0421 20:00:01.670151   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:01.670176   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:01.670188   62197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:00:01.670210   62197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.9 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-727235 NodeName:embed-certs-727235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:00:01.670391   62197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-727235"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:00:01.670479   62197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:00:01.683795   62197 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:00:01.683876   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:00:01.696350   62197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0421 20:00:01.717795   62197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:00:01.739491   62197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0421 20:00:01.761288   62197 ssh_runner.go:195] Run: grep 192.168.72.9	control-plane.minikube.internal$ /etc/hosts
	I0421 20:00:01.766285   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:00:01.781727   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:00:01.913030   62197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:00:01.934347   62197 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235 for IP: 192.168.72.9
	I0421 20:00:01.934375   62197 certs.go:194] generating shared ca certs ...
	I0421 20:00:01.934395   62197 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:00:01.934541   62197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:00:01.934615   62197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:00:01.934630   62197 certs.go:256] generating profile certs ...
	I0421 20:00:01.934729   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/client.key
	I0421 20:00:01.934796   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key.2840921d
	I0421 20:00:01.934854   62197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key
	I0421 20:00:01.934994   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:00:01.935032   62197 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:00:01.935045   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:00:01.935078   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:00:01.935110   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:00:01.935141   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:00:01.935197   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:00:01.936087   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:00:01.967117   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:00:02.003800   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:00:02.048029   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:00:02.089245   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0421 20:00:02.125745   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:00:02.163109   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:00:02.196506   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:00:02.229323   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:00:02.260648   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:00:02.290829   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:00:02.322222   62197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:00:02.344701   62197 ssh_runner.go:195] Run: openssl version
	I0421 20:00:02.352355   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:00:02.366812   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372857   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372947   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.380616   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:00:02.395933   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:00:02.411591   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418090   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418172   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.425721   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:00:02.443203   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:00:02.458442   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464317   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464386   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.471351   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:00:02.484925   62197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:00:02.491028   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 20:00:02.498970   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 20:00:02.506460   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 20:00:02.514257   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 20:00:02.521253   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 20:00:02.528828   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 20:00:02.537353   62197 kubeadm.go:391] StartCluster: {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:00:02.537443   62197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:00:02.537495   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.587891   62197 cri.go:89] found id: ""
	I0421 20:00:02.587996   62197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0421 20:00:02.601571   62197 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 20:00:02.601600   62197 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 20:00:02.601606   62197 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 20:00:02.601658   62197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 20:00:02.616596   62197 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:00:02.617728   62197 kubeconfig.go:125] found "embed-certs-727235" server: "https://192.168.72.9:8443"
	I0421 20:00:02.619968   62197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:00:02.634565   62197 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.9
	I0421 20:00:02.634618   62197 kubeadm.go:1154] stopping kube-system containers ...
	I0421 20:00:02.634633   62197 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0421 20:00:02.634699   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.685251   62197 cri.go:89] found id: ""
	I0421 20:00:02.685329   62197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 20:00:02.707720   62197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:00:02.722037   62197 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:00:02.722082   62197 kubeadm.go:156] found existing configuration files:
	
	I0421 20:00:02.722140   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:00:02.735544   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:00:02.735610   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:00:02.748027   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:00:02.759766   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:00:02.759841   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:00:02.773350   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.787463   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:00:02.787519   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.802575   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:00:02.816988   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:00:02.817045   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:00:02.830215   62197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:00:02.843407   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:03.501684   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.207411   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.448982   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.525835   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.656875   62197 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:00:04.656964   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.157388   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.657897   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.717895   62197 api_server.go:72] duration metric: took 1.061019387s to wait for apiserver process to appear ...
	I0421 20:00:05.717929   62197 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:00:05.717953   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:05.718558   62197 api_server.go:269] stopped: https://192.168.72.9:8443/healthz: Get "https://192.168.72.9:8443/healthz": dial tcp 192.168.72.9:8443: connect: connection refused
	I0421 20:00:06.218281   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.703744   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.703789   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.703806   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.722219   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.722249   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.722265   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.733030   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.733061   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:09.218765   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.224083   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.224115   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:09.718435   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.726603   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.726629   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:10.218162   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:10.224240   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 200:
	ok
	I0421 20:00:10.235750   62197 api_server.go:141] control plane version: v1.30.0
	I0421 20:00:10.235778   62197 api_server.go:131] duration metric: took 4.517842889s to wait for apiserver health ...
	I0421 20:00:10.235787   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:10.235793   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:10.237625   62197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:00:10.239279   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:00:10.262918   62197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:00:10.297402   62197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:00:10.310749   62197 system_pods.go:59] 8 kube-system pods found
	I0421 20:00:10.310805   62197 system_pods.go:61] "coredns-7db6d8ff4d-52bft" [85facf66-ffda-447c-8a04-ac95ac842470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0421 20:00:10.310818   62197 system_pods.go:61] "etcd-embed-certs-727235" [e7031073-0e50-431e-ab67-eda1fa4b9f18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 20:00:10.310833   62197 system_pods.go:61] "kube-apiserver-embed-certs-727235" [28be3882-5790-4754-9ef6-ec8f71367757] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0421 20:00:10.310847   62197 system_pods.go:61] "kube-controller-manager-embed-certs-727235" [83da56c1-3479-47f0-936f-ef9d0e4f455d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0421 20:00:10.310854   62197 system_pods.go:61] "kube-proxy-djqh8" [307fa1e9-345f-49b9-85e5-7b20b3275b45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0421 20:00:10.310865   62197 system_pods.go:61] "kube-scheduler-embed-certs-727235" [096371b2-a9b9-4867-a7da-b540432a973b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 20:00:10.310884   62197 system_pods.go:61] "metrics-server-569cc877fc-959cd" [146c80ec-6ae0-4ba3-b4be-df99fbf010a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:00:10.310901   62197 system_pods.go:61] "storage-provisioner" [054513d7-51f3-40eb-b875-b73d16c7405b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0421 20:00:10.310913   62197 system_pods.go:74] duration metric: took 13.478482ms to wait for pod list to return data ...
	I0421 20:00:10.310928   62197 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:00:10.315131   62197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:00:10.315170   62197 node_conditions.go:123] node cpu capacity is 2
	I0421 20:00:10.315187   62197 node_conditions.go:105] duration metric: took 4.252168ms to run NodePressure ...
	I0421 20:00:10.315210   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:10.620925   62197 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628865   62197 kubeadm.go:733] kubelet initialised
	I0421 20:00:10.628891   62197 kubeadm.go:734] duration metric: took 7.942591ms waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628899   62197 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:00:10.635290   62197 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:12.642618   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:14.648309   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:16.143559   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:16.143590   62197 pod_ready.go:81] duration metric: took 5.508275049s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:16.143602   62197 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:18.151189   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:20.152541   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.153814   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.649883   62197 pod_ready.go:92] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.649903   62197 pod_ready.go:81] duration metric: took 6.506293522s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.649912   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655444   62197 pod_ready.go:92] pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.655460   62197 pod_ready.go:81] duration metric: took 5.541421ms for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655468   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660078   62197 pod_ready.go:92] pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.660094   62197 pod_ready.go:81] duration metric: took 4.62017ms for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660102   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664789   62197 pod_ready.go:92] pod "kube-proxy-djqh8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.664808   62197 pod_ready.go:81] duration metric: took 4.700876ms for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664816   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668836   62197 pod_ready.go:92] pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.668851   62197 pod_ready.go:81] duration metric: took 4.029823ms for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668858   62197 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:24.676797   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:26.678669   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:29.175261   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:31.176580   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:33.677232   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:36.176401   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:38.678477   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:40.679096   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:43.178439   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:45.675906   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:47.676304   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:49.678715   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:52.176666   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:54.177353   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:56.677078   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:58.680937   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:01.175866   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:03.177322   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:05.676551   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:08.176504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:10.675324   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:12.679609   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:15.177636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:17.177938   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:19.676849   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:21.677530   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:23.679352   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:26.176177   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:28.676123   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:30.677770   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:33.176672   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:35.675473   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:37.676094   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:40.177351   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:42.675765   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:44.677504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:47.178728   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:49.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:51.676977   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:53.677967   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:56.177161   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:58.675893   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:00.676490   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:03.175994   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:05.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:08.176147   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:10.676394   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:13.176425   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:15.178380   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:17.677109   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:20.174895   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:22.176464   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:24.177654   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:26.675586   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:28.676639   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:31.176664   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:33.677030   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:36.176792   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:38.176920   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:40.180665   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:42.678395   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:45.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:47.675740   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:49.676127   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:52.179886   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:54.675602   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:56.677577   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:58.681540   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:01.179494   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:03.676002   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:06.178560   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:08.676363   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:11.176044   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:13.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:15.676011   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:17.678133   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:20.177064   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:22.676179   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:25.176206   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:27.176706   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:29.177019   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:31.677239   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:33.679396   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:36.176193   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:38.176619   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:40.676129   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:42.677052   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:44.679521   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:47.175636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:49.176114   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:51.676482   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:54.176228   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:56.675340   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:58.676581   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:01.175469   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:03.675918   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:05.677443   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:08.175700   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:10.175971   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:12.176364   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:14.675544   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:16.677069   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:19.178329   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:21.677217   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:22.669233   62197 pod_ready.go:81] duration metric: took 4m0.000357215s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	E0421 20:04:22.669279   62197 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0421 20:04:22.669298   62197 pod_ready.go:38] duration metric: took 4m12.040390946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:04:22.669328   62197 kubeadm.go:591] duration metric: took 4m20.067715018s to restartPrimaryControlPlane
	W0421 20:04:22.669388   62197 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0421 20:04:22.669420   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 20:04:55.622547   62197 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.953103457s)
	I0421 20:04:55.622619   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:04:55.642562   62197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:04:55.656647   62197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:04:55.669601   62197 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:04:55.669634   62197 kubeadm.go:156] found existing configuration files:
	
	I0421 20:04:55.669698   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:04:55.681786   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:04:55.681877   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:04:55.693186   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:04:55.704426   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:04:55.704498   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:04:55.715698   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:04:55.726902   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:04:55.726963   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:04:55.737702   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:04:55.747525   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:04:55.747578   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:04:55.758189   62197 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:04:55.822641   62197 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:04:55.822744   62197 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:04:55.980743   62197 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:04:55.980861   62197 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:04:55.980970   62197 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:04:56.253377   62197 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:04:56.255499   62197 out.go:204]   - Generating certificates and keys ...
	I0421 20:04:56.255617   62197 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:04:56.255700   62197 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:04:56.255804   62197 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 20:04:56.255884   62197 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 20:04:56.256006   62197 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 20:04:56.256106   62197 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 20:04:56.256207   62197 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 20:04:56.256308   62197 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 20:04:56.256402   62197 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 20:04:56.256509   62197 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 20:04:56.256566   62197 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 20:04:56.256644   62197 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:04:56.437649   62197 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:04:56.650553   62197 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:04:57.060706   62197 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:04:57.174098   62197 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:04:57.367997   62197 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:04:57.368680   62197 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:04:57.371654   62197 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:04:57.373516   62197 out.go:204]   - Booting up control plane ...
	I0421 20:04:57.373653   62197 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:04:57.373917   62197 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:04:57.375239   62197 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:04:57.398413   62197 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:04:57.399558   62197 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:04:57.399617   62197 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:04:57.553539   62197 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:04:57.553623   62197 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:04:58.054844   62197 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.816521ms
	I0421 20:04:58.054972   62197 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:05:03.560432   62197 kubeadm.go:309] [api-check] The API server is healthy after 5.502858901s
	I0421 20:05:03.586877   62197 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:05:03.612249   62197 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:05:03.657011   62197 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:05:03.657292   62197 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-727235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:05:03.681951   62197 kubeadm.go:309] [bootstrap-token] Using token: qlvjzn.lyyunat9omiyo08d
	I0421 20:05:03.683979   62197 out.go:204]   - Configuring RBAC rules ...
	I0421 20:05:03.684163   62197 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:05:03.692087   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:05:03.708154   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:05:03.719186   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:05:03.725682   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:05:03.743859   62197 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:05:03.966200   62197 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:05:04.418727   62197 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:05:04.965852   62197 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:05:04.967125   62197 kubeadm.go:309] 
	I0421 20:05:04.967218   62197 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:05:04.967234   62197 kubeadm.go:309] 
	I0421 20:05:04.967347   62197 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:05:04.967364   62197 kubeadm.go:309] 
	I0421 20:05:04.967386   62197 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:05:04.967457   62197 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:05:04.967526   62197 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:05:04.967536   62197 kubeadm.go:309] 
	I0421 20:05:04.967627   62197 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:05:04.967645   62197 kubeadm.go:309] 
	I0421 20:05:04.967719   62197 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:05:04.967737   62197 kubeadm.go:309] 
	I0421 20:05:04.967795   62197 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:05:04.967943   62197 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:05:04.968057   62197 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:05:04.968065   62197 kubeadm.go:309] 
	I0421 20:05:04.968137   62197 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:05:04.968219   62197 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:05:04.968226   62197 kubeadm.go:309] 
	I0421 20:05:04.968342   62197 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qlvjzn.lyyunat9omiyo08d \
	I0421 20:05:04.968485   62197 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:05:04.968517   62197 kubeadm.go:309] 	--control-plane 
	I0421 20:05:04.968526   62197 kubeadm.go:309] 
	I0421 20:05:04.968613   62197 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:05:04.968626   62197 kubeadm.go:309] 
	I0421 20:05:04.968729   62197 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qlvjzn.lyyunat9omiyo08d \
	I0421 20:05:04.968880   62197 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:05:04.969331   62197 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:05:04.969624   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:05:04.969641   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:05:04.971771   62197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:05:04.973341   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:05:04.987129   62197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:05:05.011637   62197 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:05:05.011711   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:05.011764   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-727235 minikube.k8s.io/updated_at=2024_04_21T20_05_05_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=embed-certs-727235 minikube.k8s.io/primary=true
	I0421 20:05:05.067233   62197 ops.go:34] apiserver oom_adj: -16
	I0421 20:05:05.238528   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:05.739469   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:06.238758   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:06.738799   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:07.239324   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:07.738768   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:08.239309   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:08.738788   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:09.239302   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:09.739436   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:10.239021   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:10.738776   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:11.239306   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:11.738999   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:12.238807   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:12.739328   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:13.239138   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:13.739202   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:14.238984   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:14.739315   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:15.239116   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:15.739002   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:16.239284   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:16.738885   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:17.238968   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:17.739159   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:17.887030   62197 kubeadm.go:1107] duration metric: took 12.875377625s to wait for elevateKubeSystemPrivileges
	W0421 20:05:17.887075   62197 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:05:17.887084   62197 kubeadm.go:393] duration metric: took 5m15.349737892s to StartCluster
	I0421 20:05:17.887105   62197 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:17.887211   62197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:05:17.889418   62197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:17.889699   62197 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:05:17.890940   62197 out.go:177] * Verifying Kubernetes components...
	I0421 20:05:17.889812   62197 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:05:17.889876   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:05:17.892135   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:17.892135   62197 addons.go:69] Setting default-storageclass=true in profile "embed-certs-727235"
	I0421 20:05:17.892262   62197 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-727235"
	I0421 20:05:17.892135   62197 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-727235"
	I0421 20:05:17.892349   62197 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-727235"
	W0421 20:05:17.892368   62197 addons.go:243] addon storage-provisioner should already be in state true
	I0421 20:05:17.892148   62197 addons.go:69] Setting metrics-server=true in profile "embed-certs-727235"
	I0421 20:05:17.892415   62197 addons.go:234] Setting addon metrics-server=true in "embed-certs-727235"
	W0421 20:05:17.892427   62197 addons.go:243] addon metrics-server should already be in state true
	I0421 20:05:17.892448   62197 host.go:66] Checking if "embed-certs-727235" exists ...
	I0421 20:05:17.892454   62197 host.go:66] Checking if "embed-certs-727235" exists ...
	I0421 20:05:17.892696   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.892732   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.892872   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.892894   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.892874   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.893004   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.912112   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0421 20:05:17.912149   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0421 20:05:17.912154   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I0421 20:05:17.912728   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.912823   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.912836   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.913268   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.913288   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.913395   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.913416   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.913576   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.913597   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.913859   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.913868   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.913926   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.914044   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.914443   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.914455   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.914494   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.914554   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.918634   62197 addons.go:234] Setting addon default-storageclass=true in "embed-certs-727235"
	W0421 20:05:17.918658   62197 addons.go:243] addon default-storageclass should already be in state true
	I0421 20:05:17.918690   62197 host.go:66] Checking if "embed-certs-727235" exists ...
	I0421 20:05:17.919046   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.919091   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.934397   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0421 20:05:17.934457   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0421 20:05:17.934844   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.935364   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.935384   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.935717   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.935902   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.936450   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I0421 20:05:17.937200   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.937722   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.937740   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.937806   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.938193   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 20:05:17.938262   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.940253   62197 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:05:17.938565   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.938904   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.941894   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.942116   62197 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:05:17.942127   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:05:17.942140   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 20:05:17.943273   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.943971   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.943997   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.945417   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.945825   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 20:05:17.945844   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.946146   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 20:05:17.946324   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 20:05:17.946545   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 20:05:17.946721   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 20:05:17.947089   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 20:05:17.949422   62197 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 20:05:17.950901   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 20:05:17.950918   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 20:05:17.950936   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 20:05:17.954912   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.955319   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 20:05:17.955339   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.955524   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 20:05:17.955671   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 20:05:17.955778   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 20:05:17.955891   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 20:05:17.964056   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0421 20:05:17.964584   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.965120   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.965154   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.965532   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.965763   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.967498   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 20:05:17.967755   62197 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:05:17.967774   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:05:17.967796   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 20:05:17.970713   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.971145   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 20:05:17.971197   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.971310   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 20:05:17.971561   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 20:05:17.971902   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 20:05:17.972048   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 20:05:18.138650   62197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:05:18.183377   62197 node_ready.go:35] waiting up to 6m0s for node "embed-certs-727235" to be "Ready" ...
	I0421 20:05:18.193012   62197 node_ready.go:49] node "embed-certs-727235" has status "Ready":"True"
	I0421 20:05:18.193041   62197 node_ready.go:38] duration metric: took 9.629767ms for node "embed-certs-727235" to be "Ready" ...
	I0421 20:05:18.193054   62197 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:05:18.204041   62197 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:18.419415   62197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:05:18.447355   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 20:05:18.447380   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 20:05:18.453179   62197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:05:18.567668   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 20:05:18.567702   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 20:05:18.626134   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 20:05:18.626159   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 20:05:18.735391   62197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 20:05:19.815807   62197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.362600114s)
	I0421 20:05:19.815863   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.815874   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816010   62197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.396559617s)
	I0421 20:05:19.816059   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.816075   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816198   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.816229   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.816246   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.816255   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.816263   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816336   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.816390   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.816411   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.816425   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.816436   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816578   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.816487   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.816865   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.818141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.818156   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.818178   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.862592   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.862620   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.862896   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.862911   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:20.057104   62197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321660879s)
	I0421 20:05:20.057167   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:20.057184   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:20.057475   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:20.057513   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:20.057530   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:20.057543   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:20.057554   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:20.057789   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:20.057834   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:20.057850   62197 addons.go:470] Verifying addon metrics-server=true in "embed-certs-727235"
	I0421 20:05:20.059852   62197 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0421 20:05:20.061799   62197 addons.go:505] duration metric: took 2.171989077s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0421 20:05:20.211929   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace has status "Ready":"False"
	I0421 20:05:20.716853   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.716883   62197 pod_ready.go:81] duration metric: took 2.512810672s for pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.716897   62197 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mjgjp" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.729538   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-mjgjp" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.729562   62197 pod_ready.go:81] duration metric: took 12.656265ms for pod "coredns-7db6d8ff4d-mjgjp" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.729574   62197 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.734922   62197 pod_ready.go:92] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.734945   62197 pod_ready.go:81] duration metric: took 5.363976ms for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.734957   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.744017   62197 pod_ready.go:92] pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.744042   62197 pod_ready.go:81] duration metric: took 9.077653ms for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.744052   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.756573   62197 pod_ready.go:92] pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.756596   62197 pod_ready.go:81] duration metric: took 12.536659ms for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.756609   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zh4fs" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.109950   62197 pod_ready.go:92] pod "kube-proxy-zh4fs" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:21.109979   62197 pod_ready.go:81] duration metric: took 353.361994ms for pod "kube-proxy-zh4fs" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.109994   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.511561   62197 pod_ready.go:92] pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:21.511585   62197 pod_ready.go:81] duration metric: took 401.583353ms for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.511593   62197 pod_ready.go:38] duration metric: took 3.3185271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:05:21.511607   62197 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:05:21.511654   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:05:21.529942   62197 api_server.go:72] duration metric: took 3.640186145s to wait for apiserver process to appear ...
	I0421 20:05:21.529968   62197 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:05:21.529989   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:05:21.534887   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 200:
	ok
	I0421 20:05:21.535839   62197 api_server.go:141] control plane version: v1.30.0
	I0421 20:05:21.535863   62197 api_server.go:131] duration metric: took 5.887688ms to wait for apiserver health ...
	I0421 20:05:21.535873   62197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:05:21.713348   62197 system_pods.go:59] 9 kube-system pods found
	I0421 20:05:21.713377   62197 system_pods.go:61] "coredns-7db6d8ff4d-b7p8r" [46baeec2-c553-460c-b19a-62c20d04eb00] Running
	I0421 20:05:21.713382   62197 system_pods.go:61] "coredns-7db6d8ff4d-mjgjp" [3d879b9e-8ab5-4ae6-9677-024c7172f9aa] Running
	I0421 20:05:21.713386   62197 system_pods.go:61] "etcd-embed-certs-727235" [105543da-d105-416a-aa27-09cfbd574d1c] Running
	I0421 20:05:21.713389   62197 system_pods.go:61] "kube-apiserver-embed-certs-727235" [bd07efe0-d573-483a-8ea8-7faa6277d53b] Running
	I0421 20:05:21.713393   62197 system_pods.go:61] "kube-controller-manager-embed-certs-727235" [aec17b3e-990e-4ca0-b6bd-1693eba6cb53] Running
	I0421 20:05:21.713396   62197 system_pods.go:61] "kube-proxy-zh4fs" [0b4342b3-19be-43ce-9a60-27dfab04af45] Running
	I0421 20:05:21.713398   62197 system_pods.go:61] "kube-scheduler-embed-certs-727235" [af8aff7d-caf3-46bd-9a73-08c37baeb355] Running
	I0421 20:05:21.713404   62197 system_pods.go:61] "metrics-server-569cc877fc-2vwhn" [4cb94623-a7b9-41e3-a6bc-fcc8b2856365] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:05:21.713408   62197 system_pods.go:61] "storage-provisioner" [63784fb4-2205-4b24-94c8-b11015c21ed6] Running
	I0421 20:05:21.713415   62197 system_pods.go:74] duration metric: took 177.536941ms to wait for pod list to return data ...
	I0421 20:05:21.713422   62197 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:05:21.917809   62197 default_sa.go:45] found service account: "default"
	I0421 20:05:21.917837   62197 default_sa.go:55] duration metric: took 204.409737ms for default service account to be created ...
	I0421 20:05:21.917847   62197 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:05:22.119019   62197 system_pods.go:86] 9 kube-system pods found
	I0421 20:05:22.119051   62197 system_pods.go:89] "coredns-7db6d8ff4d-b7p8r" [46baeec2-c553-460c-b19a-62c20d04eb00] Running
	I0421 20:05:22.119061   62197 system_pods.go:89] "coredns-7db6d8ff4d-mjgjp" [3d879b9e-8ab5-4ae6-9677-024c7172f9aa] Running
	I0421 20:05:22.119066   62197 system_pods.go:89] "etcd-embed-certs-727235" [105543da-d105-416a-aa27-09cfbd574d1c] Running
	I0421 20:05:22.119073   62197 system_pods.go:89] "kube-apiserver-embed-certs-727235" [bd07efe0-d573-483a-8ea8-7faa6277d53b] Running
	I0421 20:05:22.119079   62197 system_pods.go:89] "kube-controller-manager-embed-certs-727235" [aec17b3e-990e-4ca0-b6bd-1693eba6cb53] Running
	I0421 20:05:22.119084   62197 system_pods.go:89] "kube-proxy-zh4fs" [0b4342b3-19be-43ce-9a60-27dfab04af45] Running
	I0421 20:05:22.119090   62197 system_pods.go:89] "kube-scheduler-embed-certs-727235" [af8aff7d-caf3-46bd-9a73-08c37baeb355] Running
	I0421 20:05:22.119101   62197 system_pods.go:89] "metrics-server-569cc877fc-2vwhn" [4cb94623-a7b9-41e3-a6bc-fcc8b2856365] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:05:22.119108   62197 system_pods.go:89] "storage-provisioner" [63784fb4-2205-4b24-94c8-b11015c21ed6] Running
	I0421 20:05:22.119121   62197 system_pods.go:126] duration metric: took 201.26806ms to wait for k8s-apps to be running ...
	I0421 20:05:22.119130   62197 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:05:22.119178   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:05:22.136535   62197 system_svc.go:56] duration metric: took 17.395833ms WaitForService to wait for kubelet
	I0421 20:05:22.136569   62197 kubeadm.go:576] duration metric: took 4.246830881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:05:22.136600   62197 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:05:22.311566   62197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:05:22.311592   62197 node_conditions.go:123] node cpu capacity is 2
	I0421 20:05:22.311603   62197 node_conditions.go:105] duration metric: took 174.998456ms to run NodePressure ...
	I0421 20:05:22.311612   62197 start.go:240] waiting for startup goroutines ...
	I0421 20:05:22.311618   62197 start.go:245] waiting for cluster config update ...
	I0421 20:05:22.311628   62197 start.go:254] writing updated cluster config ...
	I0421 20:05:22.311880   62197 ssh_runner.go:195] Run: rm -f paused
	I0421 20:05:22.360230   62197 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:05:22.362475   62197 out.go:177] * Done! kubectl is now configured to use "embed-certs-727235" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.803673310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729989803640017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8b5b712-5f53-4216-b905-6dbb167a71c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.804267432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbf862e6-4806-43ea-9740-d522deb838a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.804362779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbf862e6-4806-43ea-9740-d522deb838a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.804464640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bbf862e6-4806-43ea-9740-d522deb838a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.841395818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45056722-4cfe-486a-9056-8ce87f95bb3f name=/runtime.v1.RuntimeService/Version
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.841498318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45056722-4cfe-486a-9056-8ce87f95bb3f name=/runtime.v1.RuntimeService/Version
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.842665388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbd8806a-a533-4097-9fdb-810bddde203c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.843214521Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729989843188200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbd8806a-a533-4097-9fdb-810bddde203c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.843719484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=173e332d-a24b-412a-a973-e238aef5eb00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.843797763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=173e332d-a24b-412a-a973-e238aef5eb00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.843830237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=173e332d-a24b-412a-a973-e238aef5eb00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.880984080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b0c2d6e-2784-40c1-a657-466a8ef8fe30 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.881086869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b0c2d6e-2784-40c1-a657-466a8ef8fe30 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.882568813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=485f70ca-c4e3-483f-b1d8-5845d756e5cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.882946693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729989882926628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=485f70ca-c4e3-483f-b1d8-5845d756e5cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.883508475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c108a30-456d-4e80-b4b1-318832fd7f02 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.883601863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c108a30-456d-4e80-b4b1-318832fd7f02 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.883654944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8c108a30-456d-4e80-b4b1-318832fd7f02 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.932986632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8463eaa-0602-4f40-be90-85f676fa15ba name=/runtime.v1.RuntimeService/Version
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.933061358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8463eaa-0602-4f40-be90-85f676fa15ba name=/runtime.v1.RuntimeService/Version
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.934210596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5f24899-fc8e-4a05-af24-d1e840c6d1a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.934644935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713729989934620870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5f24899-fc8e-4a05-af24-d1e840c6d1a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.935206870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f08df926-fff6-414c-89de-a0e6b816ab39 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.935309050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f08df926-fff6-414c-89de-a0e6b816ab39 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:06:29 old-k8s-version-867585 crio[653]: time="2024-04-21 20:06:29.935338361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f08df926-fff6-414c-89de-a0e6b816ab39 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr21 19:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052533] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043842] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr21 19:49] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.559572] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.706661] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653397] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.066823] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075953] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.180284] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.150867] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.317680] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +7.956391] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.073092] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.574533] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[ +11.346099] kauditd_printk_skb: 46 callbacks suppressed
	[Apr21 19:53] systemd-fstab-generator[4927]: Ignoring "noauto" option for root device
	[Apr21 19:55] systemd-fstab-generator[5208]: Ignoring "noauto" option for root device
	[  +0.069004] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:06:30 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-867585 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0005e1b00)
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: goroutine 165 [select]:
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b0def0, 0x4f0ac20, 0xc0000505a0, 0x1, 0xc0001000c0)
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c0e2a0, 0xc0001000c0)
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c3f760, 0xc0002db920)
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 21 20:06:29 old-k8s-version-867585 kubelet[6377]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 21 20:06:29 old-k8s-version-867585 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 21 20:06:29 old-k8s-version-867585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 21 20:06:29 old-k8s-version-867585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Apr 21 20:06:29 old-k8s-version-867585 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 21 20:06:29 old-k8s-version-867585 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 21 20:06:30 old-k8s-version-867585 kubelet[6444]: I0421 20:06:30.037345    6444 server.go:416] Version: v1.20.0
	Apr 21 20:06:30 old-k8s-version-867585 kubelet[6444]: I0421 20:06:30.037807    6444 server.go:837] Client rotation is on, will bootstrap in background
	Apr 21 20:06:30 old-k8s-version-867585 kubelet[6444]: I0421 20:06:30.040793    6444 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 21 20:06:30 old-k8s-version-867585 kubelet[6444]: W0421 20:06:30.042064    6444 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 21 20:06:30 old-k8s-version-867585 kubelet[6444]: I0421 20:06:30.042666    6444 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (252.56955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-867585" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.75s)
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (338.5s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-597568 -n no-preload-597568
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-21 20:09:14.9959744 +0000 UTC m=+6463.520109853
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-597568 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-597568 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.822µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-597568 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-597568 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-597568 logs -n 25: (1.451603144s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC | 21 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC | 21 Apr 24 20:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 20:08 UTC | 21 Apr 24 20:08 UTC |
	| start   | -p auto-474762 --memory=3072                           | auto-474762                  | jenkins | v1.33.0 | 21 Apr 24 20:08 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 20:08:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 20:08:49.247745   65923 out.go:291] Setting OutFile to fd 1 ...
	I0421 20:08:49.248025   65923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:08:49.248036   65923 out.go:304] Setting ErrFile to fd 2...
	I0421 20:08:49.248040   65923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:08:49.248235   65923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 20:08:49.248825   65923 out.go:298] Setting JSON to false
	I0421 20:08:49.249768   65923 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6627,"bootTime":1713723502,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 20:08:49.249831   65923 start.go:139] virtualization: kvm guest
	I0421 20:08:49.252312   65923 out.go:177] * [auto-474762] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 20:08:49.253913   65923 notify.go:220] Checking for updates...
	I0421 20:08:49.255482   65923 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:08:49.257238   65923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:08:49.258623   65923 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:08:49.259988   65923 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:08:49.261433   65923 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 20:08:49.262880   65923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:08:49.265017   65923 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:08:49.265169   65923 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:08:49.265305   65923 config.go:182] Loaded profile config "no-preload-597568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:08:49.265434   65923 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:08:49.304895   65923 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 20:08:49.306515   65923 start.go:297] selected driver: kvm2
	I0421 20:08:49.306547   65923 start.go:901] validating driver "kvm2" against <nil>
	I0421 20:08:49.306564   65923 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:08:49.307452   65923 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:08:49.307544   65923 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 20:08:49.323847   65923 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 20:08:49.323905   65923 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 20:08:49.324116   65923 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:08:49.324190   65923 cni.go:84] Creating CNI manager for ""
	I0421 20:08:49.324208   65923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:08:49.324223   65923 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 20:08:49.324292   65923 start.go:340] cluster config:
	{Name:auto-474762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:08:49.324402   65923 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:08:49.326951   65923 out.go:177] * Starting "auto-474762" primary control-plane node in "auto-474762" cluster
	I0421 20:08:49.328339   65923 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:08:49.328385   65923 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 20:08:49.328397   65923 cache.go:56] Caching tarball of preloaded images
	I0421 20:08:49.328495   65923 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 20:08:49.328507   65923 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 20:08:49.328619   65923 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/config.json ...
	I0421 20:08:49.328644   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/config.json: {Name:mk73b2942b127e3138c0cdd4d3eda60a95aeb3fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:08:49.328806   65923 start.go:360] acquireMachinesLock for auto-474762: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:08:49.328858   65923 start.go:364] duration metric: took 33.717µs to acquireMachinesLock for "auto-474762"
	I0421 20:08:49.328878   65923 start.go:93] Provisioning new machine with config: &{Name:auto-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName
:auto-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:08:49.328960   65923 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 20:08:49.330735   65923 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0421 20:08:49.330955   65923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:08:49.331006   65923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:08:49.347373   65923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0421 20:08:49.347849   65923 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:08:49.348540   65923 main.go:141] libmachine: Using API Version  1
	I0421 20:08:49.348567   65923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:08:49.348981   65923 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:08:49.349256   65923 main.go:141] libmachine: (auto-474762) Calling .GetMachineName
	I0421 20:08:49.349464   65923 main.go:141] libmachine: (auto-474762) Calling .DriverName
	I0421 20:08:49.349699   65923 start.go:159] libmachine.API.Create for "auto-474762" (driver="kvm2")
	I0421 20:08:49.349740   65923 client.go:168] LocalClient.Create starting
	I0421 20:08:49.349783   65923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 20:08:49.349835   65923 main.go:141] libmachine: Decoding PEM data...
	I0421 20:08:49.349860   65923 main.go:141] libmachine: Parsing certificate...
	I0421 20:08:49.349960   65923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 20:08:49.349993   65923 main.go:141] libmachine: Decoding PEM data...
	I0421 20:08:49.350008   65923 main.go:141] libmachine: Parsing certificate...
	I0421 20:08:49.350029   65923 main.go:141] libmachine: Running pre-create checks...
	I0421 20:08:49.350043   65923 main.go:141] libmachine: (auto-474762) Calling .PreCreateCheck
	I0421 20:08:49.350532   65923 main.go:141] libmachine: (auto-474762) Calling .GetConfigRaw
	I0421 20:08:49.350959   65923 main.go:141] libmachine: Creating machine...
	I0421 20:08:49.350978   65923 main.go:141] libmachine: (auto-474762) Calling .Create
	I0421 20:08:49.351146   65923 main.go:141] libmachine: (auto-474762) Creating KVM machine...
	I0421 20:08:49.352852   65923 main.go:141] libmachine: (auto-474762) DBG | found existing default KVM network
	I0421 20:08:49.353990   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:49.353841   65946 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:48:fb} reservation:<nil>}
	I0421 20:08:49.355251   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:49.355083   65946 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6730}
	I0421 20:08:49.355291   65923 main.go:141] libmachine: (auto-474762) DBG | created network xml: 
	I0421 20:08:49.355308   65923 main.go:141] libmachine: (auto-474762) DBG | <network>
	I0421 20:08:49.355321   65923 main.go:141] libmachine: (auto-474762) DBG |   <name>mk-auto-474762</name>
	I0421 20:08:49.355328   65923 main.go:141] libmachine: (auto-474762) DBG |   <dns enable='no'/>
	I0421 20:08:49.355332   65923 main.go:141] libmachine: (auto-474762) DBG |   
	I0421 20:08:49.355339   65923 main.go:141] libmachine: (auto-474762) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0421 20:08:49.355343   65923 main.go:141] libmachine: (auto-474762) DBG |     <dhcp>
	I0421 20:08:49.355352   65923 main.go:141] libmachine: (auto-474762) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0421 20:08:49.355363   65923 main.go:141] libmachine: (auto-474762) DBG |     </dhcp>
	I0421 20:08:49.355373   65923 main.go:141] libmachine: (auto-474762) DBG |   </ip>
	I0421 20:08:49.355383   65923 main.go:141] libmachine: (auto-474762) DBG |   
	I0421 20:08:49.355395   65923 main.go:141] libmachine: (auto-474762) DBG | </network>
	I0421 20:08:49.355405   65923 main.go:141] libmachine: (auto-474762) DBG | 
	I0421 20:08:49.360815   65923 main.go:141] libmachine: (auto-474762) DBG | trying to create private KVM network mk-auto-474762 192.168.50.0/24...
	I0421 20:08:49.443000   65923 main.go:141] libmachine: (auto-474762) DBG | private KVM network mk-auto-474762 192.168.50.0/24 created
	I0421 20:08:49.443445   65923 main.go:141] libmachine: (auto-474762) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762 ...
	I0421 20:08:49.444902   65923 main.go:141] libmachine: (auto-474762) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 20:08:49.444964   65923 main.go:141] libmachine: (auto-474762) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 20:08:49.444985   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:49.444775   65946 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:08:49.686195   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:49.686010   65946 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762/id_rsa...
	I0421 20:08:49.898368   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:49.898241   65946 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762/auto-474762.rawdisk...
	I0421 20:08:49.898401   65923 main.go:141] libmachine: (auto-474762) DBG | Writing magic tar header
	I0421 20:08:49.898419   65923 main.go:141] libmachine: (auto-474762) DBG | Writing SSH key tar header
	I0421 20:08:49.898432   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:49.898354   65946 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762 ...
	I0421 20:08:49.898451   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762
	I0421 20:08:49.898478   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 20:08:49.898493   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:08:49.898503   65923 main.go:141] libmachine: (auto-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762 (perms=drwx------)
	I0421 20:08:49.898518   65923 main.go:141] libmachine: (auto-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 20:08:49.898529   65923 main.go:141] libmachine: (auto-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 20:08:49.898544   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 20:08:49.898562   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 20:08:49.898591   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home/jenkins
	I0421 20:08:49.898604   65923 main.go:141] libmachine: (auto-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 20:08:49.898616   65923 main.go:141] libmachine: (auto-474762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 20:08:49.898630   65923 main.go:141] libmachine: (auto-474762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 20:08:49.898652   65923 main.go:141] libmachine: (auto-474762) Creating domain...
	I0421 20:08:49.898666   65923 main.go:141] libmachine: (auto-474762) DBG | Checking permissions on dir: /home
	I0421 20:08:49.898708   65923 main.go:141] libmachine: (auto-474762) DBG | Skipping /home - not owner
	I0421 20:08:49.899771   65923 main.go:141] libmachine: (auto-474762) define libvirt domain using xml: 
	I0421 20:08:49.899790   65923 main.go:141] libmachine: (auto-474762) <domain type='kvm'>
	I0421 20:08:49.899799   65923 main.go:141] libmachine: (auto-474762)   <name>auto-474762</name>
	I0421 20:08:49.899807   65923 main.go:141] libmachine: (auto-474762)   <memory unit='MiB'>3072</memory>
	I0421 20:08:49.899816   65923 main.go:141] libmachine: (auto-474762)   <vcpu>2</vcpu>
	I0421 20:08:49.899824   65923 main.go:141] libmachine: (auto-474762)   <features>
	I0421 20:08:49.899832   65923 main.go:141] libmachine: (auto-474762)     <acpi/>
	I0421 20:08:49.899838   65923 main.go:141] libmachine: (auto-474762)     <apic/>
	I0421 20:08:49.899853   65923 main.go:141] libmachine: (auto-474762)     <pae/>
	I0421 20:08:49.899860   65923 main.go:141] libmachine: (auto-474762)     
	I0421 20:08:49.899884   65923 main.go:141] libmachine: (auto-474762)   </features>
	I0421 20:08:49.899909   65923 main.go:141] libmachine: (auto-474762)   <cpu mode='host-passthrough'>
	I0421 20:08:49.899919   65923 main.go:141] libmachine: (auto-474762)   
	I0421 20:08:49.899931   65923 main.go:141] libmachine: (auto-474762)   </cpu>
	I0421 20:08:49.899941   65923 main.go:141] libmachine: (auto-474762)   <os>
	I0421 20:08:49.899951   65923 main.go:141] libmachine: (auto-474762)     <type>hvm</type>
	I0421 20:08:49.899959   65923 main.go:141] libmachine: (auto-474762)     <boot dev='cdrom'/>
	I0421 20:08:49.899970   65923 main.go:141] libmachine: (auto-474762)     <boot dev='hd'/>
	I0421 20:08:49.899988   65923 main.go:141] libmachine: (auto-474762)     <bootmenu enable='no'/>
	I0421 20:08:49.900001   65923 main.go:141] libmachine: (auto-474762)   </os>
	I0421 20:08:49.900013   65923 main.go:141] libmachine: (auto-474762)   <devices>
	I0421 20:08:49.900024   65923 main.go:141] libmachine: (auto-474762)     <disk type='file' device='cdrom'>
	I0421 20:08:49.900042   65923 main.go:141] libmachine: (auto-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762/boot2docker.iso'/>
	I0421 20:08:49.900054   65923 main.go:141] libmachine: (auto-474762)       <target dev='hdc' bus='scsi'/>
	I0421 20:08:49.900066   65923 main.go:141] libmachine: (auto-474762)       <readonly/>
	I0421 20:08:49.900077   65923 main.go:141] libmachine: (auto-474762)     </disk>
	I0421 20:08:49.900100   65923 main.go:141] libmachine: (auto-474762)     <disk type='file' device='disk'>
	I0421 20:08:49.900116   65923 main.go:141] libmachine: (auto-474762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 20:08:49.900129   65923 main.go:141] libmachine: (auto-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/auto-474762/auto-474762.rawdisk'/>
	I0421 20:08:49.900140   65923 main.go:141] libmachine: (auto-474762)       <target dev='hda' bus='virtio'/>
	I0421 20:08:49.900152   65923 main.go:141] libmachine: (auto-474762)     </disk>
	I0421 20:08:49.900163   65923 main.go:141] libmachine: (auto-474762)     <interface type='network'>
	I0421 20:08:49.900174   65923 main.go:141] libmachine: (auto-474762)       <source network='mk-auto-474762'/>
	I0421 20:08:49.900183   65923 main.go:141] libmachine: (auto-474762)       <model type='virtio'/>
	I0421 20:08:49.900205   65923 main.go:141] libmachine: (auto-474762)     </interface>
	I0421 20:08:49.900223   65923 main.go:141] libmachine: (auto-474762)     <interface type='network'>
	I0421 20:08:49.900232   65923 main.go:141] libmachine: (auto-474762)       <source network='default'/>
	I0421 20:08:49.900243   65923 main.go:141] libmachine: (auto-474762)       <model type='virtio'/>
	I0421 20:08:49.900254   65923 main.go:141] libmachine: (auto-474762)     </interface>
	I0421 20:08:49.900263   65923 main.go:141] libmachine: (auto-474762)     <serial type='pty'>
	I0421 20:08:49.900279   65923 main.go:141] libmachine: (auto-474762)       <target port='0'/>
	I0421 20:08:49.900291   65923 main.go:141] libmachine: (auto-474762)     </serial>
	I0421 20:08:49.900321   65923 main.go:141] libmachine: (auto-474762)     <console type='pty'>
	I0421 20:08:49.900346   65923 main.go:141] libmachine: (auto-474762)       <target type='serial' port='0'/>
	I0421 20:08:49.900361   65923 main.go:141] libmachine: (auto-474762)     </console>
	I0421 20:08:49.900372   65923 main.go:141] libmachine: (auto-474762)     <rng model='virtio'>
	I0421 20:08:49.900385   65923 main.go:141] libmachine: (auto-474762)       <backend model='random'>/dev/random</backend>
	I0421 20:08:49.900396   65923 main.go:141] libmachine: (auto-474762)     </rng>
	I0421 20:08:49.900407   65923 main.go:141] libmachine: (auto-474762)     
	I0421 20:08:49.900417   65923 main.go:141] libmachine: (auto-474762)     
	I0421 20:08:49.900428   65923 main.go:141] libmachine: (auto-474762)   </devices>
	I0421 20:08:49.900439   65923 main.go:141] libmachine: (auto-474762) </domain>
	I0421 20:08:49.900450   65923 main.go:141] libmachine: (auto-474762) 
	I0421 20:08:49.905151   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:9a:51:37 in network default
	I0421 20:08:49.905874   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:49.905907   65923 main.go:141] libmachine: (auto-474762) Ensuring networks are active...
	I0421 20:08:49.906926   65923 main.go:141] libmachine: (auto-474762) Ensuring network default is active
	I0421 20:08:49.907275   65923 main.go:141] libmachine: (auto-474762) Ensuring network mk-auto-474762 is active
	I0421 20:08:49.907863   65923 main.go:141] libmachine: (auto-474762) Getting domain xml...
	I0421 20:08:49.908526   65923 main.go:141] libmachine: (auto-474762) Creating domain...
	I0421 20:08:51.185186   65923 main.go:141] libmachine: (auto-474762) Waiting to get IP...
	I0421 20:08:51.186177   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:51.186634   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:51.186686   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:51.186620   65946 retry.go:31] will retry after 305.300911ms: waiting for machine to come up
	I0421 20:08:51.493327   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:51.493857   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:51.493885   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:51.493813   65946 retry.go:31] will retry after 294.594779ms: waiting for machine to come up
	I0421 20:08:51.790498   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:51.790951   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:51.790974   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:51.790904   65946 retry.go:31] will retry after 372.758275ms: waiting for machine to come up
	I0421 20:08:52.165655   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:52.166199   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:52.166230   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:52.166154   65946 retry.go:31] will retry after 571.80654ms: waiting for machine to come up
	I0421 20:08:52.739979   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:52.740682   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:52.740710   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:52.740616   65946 retry.go:31] will retry after 761.643666ms: waiting for machine to come up
	I0421 20:08:53.504369   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:53.505059   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:53.505088   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:53.504992   65946 retry.go:31] will retry after 670.246653ms: waiting for machine to come up
	I0421 20:08:54.176572   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:54.177010   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:54.177036   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:54.176968   65946 retry.go:31] will retry after 931.805863ms: waiting for machine to come up
	I0421 20:08:55.110228   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:55.110654   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:55.110691   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:55.110621   65946 retry.go:31] will retry after 1.401662565s: waiting for machine to come up
	I0421 20:08:56.514444   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:56.515043   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:56.515074   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:56.514984   65946 retry.go:31] will retry after 1.664988605s: waiting for machine to come up
	I0421 20:08:58.182109   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:58.182604   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:58.182633   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:58.182570   65946 retry.go:31] will retry after 1.621192199s: waiting for machine to come up
	I0421 20:08:59.805460   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:08:59.805915   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:08:59.805944   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:08:59.805875   65946 retry.go:31] will retry after 2.733828646s: waiting for machine to come up
	I0421 20:09:02.541868   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:02.542329   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:09:02.542351   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:09:02.542291   65946 retry.go:31] will retry after 2.372505522s: waiting for machine to come up
	I0421 20:09:04.916547   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:04.917113   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:09:04.917141   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:09:04.917048   65946 retry.go:31] will retry after 3.772964171s: waiting for machine to come up
	I0421 20:09:08.691027   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:08.691592   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find current IP address of domain auto-474762 in network mk-auto-474762
	I0421 20:09:08.691631   65923 main.go:141] libmachine: (auto-474762) DBG | I0421 20:09:08.691511   65946 retry.go:31] will retry after 5.511515663s: waiting for machine to come up
	I0421 20:09:14.207721   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:14.208192   65923 main.go:141] libmachine: (auto-474762) Found IP for machine: 192.168.50.171
	I0421 20:09:14.208218   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has current primary IP address 192.168.50.171 and MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:14.208228   65923 main.go:141] libmachine: (auto-474762) Reserving static IP address...
	I0421 20:09:14.208634   65923 main.go:141] libmachine: (auto-474762) DBG | unable to find host DHCP lease matching {name: "auto-474762", mac: "52:54:00:67:09:11", ip: "192.168.50.171"} in network mk-auto-474762
	
	
	==> CRI-O <==
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.686886175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730155686848807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9878953-58b6-472f-9fb7-498469b0215d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.687867665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94356afc-3f8f-4bef-ab53-82d15ae8561a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.687988344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94356afc-3f8f-4bef-ab53-82d15ae8561a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.688291980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94356afc-3f8f-4bef-ab53-82d15ae8561a name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.745790526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=622f65f9-7dd0-4b89-bafa-0e3188f62b68 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.745895344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=622f65f9-7dd0-4b89-bafa-0e3188f62b68 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.747635871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=747c9f06-a4b0-4a1a-b456-9774dda28fdd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.748176197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730155748142193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=747c9f06-a4b0-4a1a-b456-9774dda28fdd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.748807496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04f9b2a5-5f89-40ff-80a2-6a308f11989e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.748895548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04f9b2a5-5f89-40ff-80a2-6a308f11989e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.749470760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04f9b2a5-5f89-40ff-80a2-6a308f11989e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.800820667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf2e2357-8975-4e4e-bba7-8c7d9bcf19a5 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.800921075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf2e2357-8975-4e4e-bba7-8c7d9bcf19a5 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.803136016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=528d8bf8-92ec-45c0-935c-a036c9fb505b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.803655230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730155803630011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=528d8bf8-92ec-45c0-935c-a036c9fb505b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.804239857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afcd79b8-9e86-40d3-abc1-f318b766c444 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.804322946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afcd79b8-9e86-40d3-abc1-f318b766c444 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.805088492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afcd79b8-9e86-40d3-abc1-f318b766c444 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.843799469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb100987-7d40-428c-bfbf-437644696bfc name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.843882918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb100987-7d40-428c-bfbf-437644696bfc name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.846282499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21d416c0-b0ae-466a-8429-4624b0edb1a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.846749070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730155846723104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21d416c0-b0ae-466a-8429-4624b0edb1a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.847620215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=075bc480-febc-4b6a-bad9-442db2b452f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.847701271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=075bc480-febc-4b6a-bad9-442db2b452f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:15 no-preload-597568 crio[722]: time="2024-04-21 20:09:15.848026927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b,PodSandboxId:60edc6f949a1223090d0c6f89f1a1fe0d7490f94d85a9b8088fad675d1106971,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273327211615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1010760-2a31-4a34-9305-c77b99f8a38b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a875cc7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1,PodSandboxId:eb22644072be590dafc139a21493521d79d425bf7a8985da7b1222de3a0039e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729273260713045,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vh287,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 53438272-118b-444d-bdb3-96acac5d2185,},Annotations:map[string]string{io.kubernetes.container.hash: 9d74fc26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36,PodSandboxId:0e596db87b17527982e88694759b8ef35593dadceaef79399123dd9ced5e4d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1713729272493961554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4106db36-bd7f-448d-b545-803fd8606796,},Annotations:map[string]string{io.kubernetes.container.hash: e6a09ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32,PodSandboxId:c0faefa7d7740441f78b8fddea21ca8c8a4f533efd32ab896808d57d0d170b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713729271266126475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-km222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2107f9b-c9a7-4c4c-9969-d22441d95fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 347bed79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c,PodSandboxId:930b5ef33edb37e4aca996afd3143fc6ae4ec3fb37b31d322cd4addd9b241bb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729251370934552,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23ed8663ac1a2cc43f4e39b24c26d50,},Annotations:map[string]string{io.kubernetes.container.hash: af94ce97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d,PodSandboxId:850e8c2c86d8d784eff22c32ebe7297278a126c04bda61dc8764a23c16e49bce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729251362203012,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ed0d5107e534d80b49184d0dbd76260,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281,PodSandboxId:2cb5c65a4ccf6295957d915e5ad92e85144cb8e76bc5080b82109a5ed3769885,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729251335677606,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e85475da4dec2194823bf2d492d1eef0,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e,PodSandboxId:4116a636eac9c6a395ffb490cf9e2ab191eb6280ccbd02da3cf27bf40ab3d599,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729251221083801,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-597568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85f45122d39b5ba86d43ab85b1785dd,},Annotations:map[string]string{io.kubernetes.container.hash: 7a2cb59d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=075bc480-febc-4b6a-bad9-442db2b452f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7b27fdecb0d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   60edc6f949a12       coredns-7db6d8ff4d-vtxv7
	7875176994a40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   eb22644072be5       coredns-7db6d8ff4d-vh287
	6afa1b4b5a5b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   0e596db87b175       storage-provisioner
	370d702a2b5cd       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 minutes ago      Running             kube-proxy                0                   c0faefa7d7740       kube-proxy-km222
	1bf9ee926ddc0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   930b5ef33edb3       etcd-no-preload-597568
	ede0e8fc4bf66       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   15 minutes ago      Running             kube-controller-manager   2                   850e8c2c86d8d       kube-controller-manager-no-preload-597568
	a9f121b4732e0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 minutes ago      Running             kube-scheduler            2                   2cb5c65a4ccf6       kube-scheduler-no-preload-597568
	f525f9081ae7b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   15 minutes ago      Running             kube-apiserver            2                   4116a636eac9c       kube-apiserver-no-preload-597568
	
	
	==> coredns [7875176994a40350e7f7543c886197f101cd51d9ed72205e4de95880ca36ccd1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bb7b27fdecb0d69bb219c7293f9ee4c6f29bed5cfea64627a8ee40443db3795b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-597568
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-597568
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=no-preload-597568
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_54_17_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-597568
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:09:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:04:48 +0000   Sun, 21 Apr 2024 19:54:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:04:48 +0000   Sun, 21 Apr 2024 19:54:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:04:48 +0000   Sun, 21 Apr 2024 19:54:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:04:48 +0000   Sun, 21 Apr 2024 19:54:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    no-preload-597568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43837302ba054f0dabdbc5eba4081f11
	  System UUID:                43837302-ba05-4f0d-abdb-c5eba4081f11
	  Boot ID:                    79e64129-0fbc-4036-928a-66c5cf129043
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-vh287                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-vtxv7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-597568                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-597568              250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-597568    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-km222                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-597568              100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-p9f9x               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-597568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-597568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-597568 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-597568 event: Registered Node no-preload-597568 in Controller
	
	
	==> dmesg <==
	[  +0.044183] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662461] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.527263] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.697682] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.636411] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058976] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072806] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.205746] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.136530] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.307706] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Apr21 19:49] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.054679] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.992237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +3.014645] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.864279] kauditd_printk_skb: 53 callbacks suppressed
	[ +11.041209] kauditd_printk_skb: 24 callbacks suppressed
	[Apr21 19:54] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.632779] systemd-fstab-generator[4064]: Ignoring "noauto" option for root device
	[  +4.579608] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.991666] systemd-fstab-generator[4390]: Ignoring "noauto" option for root device
	[ +14.414700] systemd-fstab-generator[4595]: Ignoring "noauto" option for root device
	[  +0.080240] kauditd_printk_skb: 14 callbacks suppressed
	[Apr21 19:55] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [1bf9ee926ddc0736e8db097f1dec56b60f367dc94e83b3f2a5a992df143fec7c] <==
	{"level":"info","ts":"2024-04-21T19:54:12.079138Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"af2c917f7a70ddd0","local-member-attributes":"{Name:no-preload-597568 ClientURLs:[https://192.168.39.120:2379]}","request-path":"/0/members/af2c917f7a70ddd0/attributes","cluster-id":"f3de5e1602edc73b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:54:12.079315Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.079525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:54:12.081443Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:54:12.081485Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:54:12.081519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.081567Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.081581Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:54:12.081626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:54:12.086145Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.120:2379"}
	{"level":"info","ts":"2024-04-21T19:54:12.098856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T20:00:03.332345Z","caller":"traceutil/trace.go:171","msg":"trace[83220497] transaction","detail":"{read_only:false; response_revision:722; number_of_response:1; }","duration":"572.548804ms","start":"2024-04-21T20:00:02.759721Z","end":"2024-04-21T20:00:03.33227Z","steps":["trace[83220497] 'process raft request'  (duration: 572.303848ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:00:03.332439Z","caller":"traceutil/trace.go:171","msg":"trace[1553232862] linearizableReadLoop","detail":"{readStateIndex:807; appliedIndex:807; }","duration":"146.881785ms","start":"2024-04-21T20:00:03.185469Z","end":"2024-04-21T20:00:03.332351Z","steps":["trace[1553232862] 'read index received'  (duration: 146.872528ms)","trace[1553232862] 'applied index is now lower than readState.Index'  (duration: 7.607µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:00:03.332714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.136527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:00:03.334356Z","caller":"traceutil/trace.go:171","msg":"trace[1848858816] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:722; }","duration":"148.893138ms","start":"2024-04-21T20:00:03.185442Z","end":"2024-04-21T20:00:03.334335Z","steps":["trace[1848858816] 'agreement among raft nodes before linearized reading'  (duration: 147.099037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:00:03.33453Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:00:02.759702Z","time spent":"573.631438ms","remote":"127.0.0.1:55998","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:720 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-21T20:00:03.799342Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.295474ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15983432317249636202 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-597568\" mod_revision:715 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T20:00:03.799523Z","caller":"traceutil/trace.go:171","msg":"trace[1524348268] transaction","detail":"{read_only:false; response_revision:723; number_of_response:1; }","duration":"537.425042ms","start":"2024-04-21T20:00:03.262084Z","end":"2024-04-21T20:00:03.799509Z","steps":["trace[1524348268] 'process raft request'  (duration: 199.69156ms)","trace[1524348268] 'compare'  (duration: 337.043778ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:00:03.799586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:00:03.262067Z","time spent":"537.486143ms","remote":"127.0.0.1:56098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":556,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-597568\" mod_revision:715 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-597568\" > >"}
	{"level":"info","ts":"2024-04-21T20:04:12.242701Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":679}
	{"level":"info","ts":"2024-04-21T20:04:12.254653Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":679,"took":"11.501199ms","hash":2384450383,"current-db-size-bytes":2134016,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2134016,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-04-21T20:04:12.254723Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2384450383,"revision":679,"compact-revision":-1}
	{"level":"info","ts":"2024-04-21T20:09:12.252614Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2024-04-21T20:09:12.257266Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":922,"took":"3.901481ms","hash":1207815653,"current-db-size-bytes":2134016,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-21T20:09:12.257528Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1207815653,"revision":922,"compact-revision":679}
	
	
	==> kernel <==
	 20:09:16 up 20 min,  0 users,  load average: 0.29, 0.16, 0.11
	Linux no-preload-597568 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f525f9081ae7b08afb59fce6577063d53efeb3ed3f6fb898e00431107946193e] <==
	I0421 20:04:15.087551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:05:15.086437       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:05:15.086775       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:05:15.086838       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:05:15.087726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:05:15.087785       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:05:15.087954       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:07:15.087607       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:07:15.087954       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:07:15.087996       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:07:15.088195       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:07:15.088273       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:07:15.089067       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:09:14.091125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:09:14.091343       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0421 20:09:15.091646       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:09:15.091754       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:09:15.091763       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:09:15.091656       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:09:15.091817       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:09:15.092808       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ede0e8fc4bf66c68da2dac99110d3a1c4d30d92e09756d176611464c23c8077d] <==
	I0421 20:03:31.483104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:04:00.963530       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:04:01.492150       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:04:30.970684       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:04:31.502502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:05:00.978805       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:05:01.522856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:05:30.994882       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:05:31.531616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:05:33.011227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="238.46µs"
	I0421 20:05:47.999756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="107.401µs"
	E0421 20:06:00.999884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:06:01.539516       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:06:31.007020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:06:31.548689       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:07:01.013640       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:07:01.562207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:07:31.020840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:07:31.573151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:08:01.026424       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:08:01.584303       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:08:31.031874       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:08:31.594241       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:09:01.039925       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:09:01.615257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [370d702a2b5cd0ba17e13a4cf0f54e5a3934202b85e8208181da94b52de5ce32] <==
	I0421 19:54:31.502670       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:54:31.523322       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	I0421 19:54:31.605900       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:54:31.605963       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:54:31.606001       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:54:31.613091       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:54:31.613268       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:54:31.613310       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:54:31.614830       1 config.go:192] "Starting service config controller"
	I0421 19:54:31.614844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:54:31.614864       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:54:31.614867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:54:31.615156       1 config.go:319] "Starting node config controller"
	I0421 19:54:31.615195       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:54:31.715927       1 shared_informer.go:320] Caches are synced for node config
	I0421 19:54:31.715958       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:54:31.715989       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a9f121b4732e0ec587b2cd4246b11cc13bbe3b8707184a635a568de3f9730281] <==
	W0421 19:54:15.043354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:54:15.043472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:54:15.057487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 19:54:15.057540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 19:54:15.070311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:54:15.070503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:54:15.084859       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:54:15.085123       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:54:15.112854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 19:54:15.113044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 19:54:15.118974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:54:15.119004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:54:15.139305       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 19:54:15.141052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 19:54:15.161457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 19:54:15.161654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 19:54:15.232530       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 19:54:15.232660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 19:54:15.249583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:54:15.249736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:54:15.519244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:54:15.519302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:54:15.542666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:54:15.542730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0421 19:54:17.034699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:06:17 no-preload-597568 kubelet[4398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:06:27 no-preload-597568 kubelet[4398]: E0421 20:06:27.982724    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:06:42 no-preload-597568 kubelet[4398]: E0421 20:06:42.986690    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:06:53 no-preload-597568 kubelet[4398]: E0421 20:06:53.984468    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:07:05 no-preload-597568 kubelet[4398]: E0421 20:07:05.984357    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:07:17 no-preload-597568 kubelet[4398]: E0421 20:07:17.042618    4398 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:07:17 no-preload-597568 kubelet[4398]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:07:17 no-preload-597568 kubelet[4398]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:07:17 no-preload-597568 kubelet[4398]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:07:17 no-preload-597568 kubelet[4398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:07:18 no-preload-597568 kubelet[4398]: E0421 20:07:18.983921    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:07:29 no-preload-597568 kubelet[4398]: E0421 20:07:29.983756    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:07:43 no-preload-597568 kubelet[4398]: E0421 20:07:43.983983    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:07:56 no-preload-597568 kubelet[4398]: E0421 20:07:56.983814    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:08:10 no-preload-597568 kubelet[4398]: E0421 20:08:10.984603    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:08:17 no-preload-597568 kubelet[4398]: E0421 20:08:17.044917    4398 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:08:17 no-preload-597568 kubelet[4398]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:08:17 no-preload-597568 kubelet[4398]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:08:17 no-preload-597568 kubelet[4398]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:08:17 no-preload-597568 kubelet[4398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:08:21 no-preload-597568 kubelet[4398]: E0421 20:08:21.983720    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:08:34 no-preload-597568 kubelet[4398]: E0421 20:08:34.983960    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:08:46 no-preload-597568 kubelet[4398]: E0421 20:08:46.985054    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:09:00 no-preload-597568 kubelet[4398]: E0421 20:09:00.984994    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	Apr 21 20:09:15 no-preload-597568 kubelet[4398]: E0421 20:09:15.984861    4398 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-p9f9x" podUID="0d589795-3d85-4d06-9647-f4426e705f34"
	
	
	==> storage-provisioner [6afa1b4b5a5b4396c7fb78c3ce80cc431981bc12350737e55e38d276e0d07f36] <==
	I0421 19:54:32.683932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 19:54:32.796528       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 19:54:32.796680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 19:54:32.815748       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 19:54:32.816004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-597568_91fac8ad-76ca-4123-b811-61aedbd9e6e6!
	I0421 19:54:32.822868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"098c7d3e-8032-4e18-b0a7-71897245390c", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-597568_91fac8ad-76ca-4123-b811-61aedbd9e6e6 became leader
	I0421 19:54:32.916991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-597568_91fac8ad-76ca-4123-b811-61aedbd9e6e6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-597568 -n no-preload-597568
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-597568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-p9f9x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-597568 describe pod metrics-server-569cc877fc-p9f9x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-597568 describe pod metrics-server-569cc877fc-p9f9x: exit status 1 (69.186808ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-p9f9x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-597568 describe pod metrics-server-569cc877fc-p9f9x: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (338.50s)
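For manual follow-up, this is a minimal sketch of the check the test automates: it waits for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and only then inspects the dashboard deployment. The profile name no-preload-597568 is taken from the logs above; the commands assume that cluster still exists and would have to be re-run against a live profile.

	# List any dashboard pods; in the failed run none became Ready.
	kubectl --context no-preload-597568 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	# Reproduce the readiness wait with a shorter timeout than the test uses.
	kubectl --context no-preload-597568 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=60s

The kubelet log above also shows metrics-server stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4, which matches the fake registry override the suite passes via "addons enable metrics-server --registries=MetricsServer=fake.domain" (visible for other profiles in the Audit table further down), so that backoff by itself is expected noise rather than the cause of this failure.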

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (317.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-21 20:09:41.795858115 +0000 UTC m=+6490.319993557
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.302µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-167454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
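The expectation at this step is that deploy/dashboard-metrics-scraper references registry.k8s.io/echoserver:1.4; the describe call above never produced output because the test context had already exceeded its deadline. A minimal sketch of the equivalent manual check, assuming the default-k8s-diff-port-167454 profile is still reachable:

	# Print the container image(s) configured on the scraper deployment.
	kubectl --context default-k8s-diff-port-167454 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'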
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-167454 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-167454 logs -n 25: (1.390146127s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC | 21 Apr 24 20:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 20:08 UTC | 21 Apr 24 20:08 UTC |
	| start   | -p auto-474762 --memory=3072                           | auto-474762                  | jenkins | v1.33.0 | 21 Apr 24 20:08 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 20:09 UTC | 21 Apr 24 20:09 UTC |
	| start   | -p kindnet-474762                                      | kindnet-474762               | jenkins | v1.33.0 | 21 Apr 24 20:09 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 20:09:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 20:09:18.205620   66415 out.go:291] Setting OutFile to fd 1 ...
	I0421 20:09:18.205951   66415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:09:18.205965   66415 out.go:304] Setting ErrFile to fd 2...
	I0421 20:09:18.205972   66415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:09:18.206244   66415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 20:09:18.206950   66415 out.go:298] Setting JSON to false
	I0421 20:09:18.208123   66415 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6656,"bootTime":1713723502,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 20:09:18.208186   66415 start.go:139] virtualization: kvm guest
	I0421 20:09:18.210743   66415 out.go:177] * [kindnet-474762] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 20:09:18.212697   66415 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:09:18.212728   66415 notify.go:220] Checking for updates...
	I0421 20:09:18.214333   66415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:09:18.215778   66415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:09:18.217155   66415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:09:18.218469   66415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 20:09:18.219575   66415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:09:18.221333   66415 config.go:182] Loaded profile config "auto-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:09:18.221485   66415 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:09:18.221605   66415 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:09:18.221745   66415 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:09:18.270822   66415 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 20:09:18.272399   66415 start.go:297] selected driver: kvm2
	I0421 20:09:18.272414   66415 start.go:901] validating driver "kvm2" against <nil>
	I0421 20:09:18.272443   66415 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:09:18.273404   66415 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:09:18.273491   66415 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 20:09:18.291981   66415 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 20:09:18.292026   66415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 20:09:18.292254   66415 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:09:18.292342   66415 cni.go:84] Creating CNI manager for "kindnet"
	I0421 20:09:18.292359   66415 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0421 20:09:18.292446   66415 start.go:340] cluster config:
	{Name:kindnet-474762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:09:18.292555   66415 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:09:18.294441   66415 out.go:177] * Starting "kindnet-474762" primary control-plane node in "kindnet-474762" cluster
	I0421 20:09:18.295812   66415 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:09:18.295866   66415 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 20:09:18.295882   66415 cache.go:56] Caching tarball of preloaded images
	I0421 20:09:18.295979   66415 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 20:09:18.295994   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 20:09:18.296132   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/config.json ...
	I0421 20:09:18.296157   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/config.json: {Name:mkd4b7d1e82a9a013fa50ee3619cd40e5cd3e29f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:18.296299   66415 start.go:360] acquireMachinesLock for kindnet-474762: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:09:18.296329   66415 start.go:364] duration metric: took 17.819µs to acquireMachinesLock for "kindnet-474762"
	I0421 20:09:18.296348   66415 start.go:93] Provisioning new machine with config: &{Name:kindnet-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterN
ame:kindnet-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:09:18.296438   66415 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 20:09:17.828192   65923 main.go:141] libmachine: (auto-474762) Calling .GetIP
	I0421 20:09:17.868765   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:17.869355   65923 main.go:141] libmachine: (auto-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:09:11", ip: ""} in network mk-auto-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:09:05 +0000 UTC Type:0 Mac:52:54:00:67:09:11 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:auto-474762 Clientid:01:52:54:00:67:09:11}
	I0421 20:09:17.869386   65923 main.go:141] libmachine: (auto-474762) DBG | domain auto-474762 has defined IP address 192.168.50.171 and MAC address 52:54:00:67:09:11 in network mk-auto-474762
	I0421 20:09:17.869596   65923 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0421 20:09:17.875182   65923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:09:17.890826   65923 kubeadm.go:877] updating cluster {Name:auto-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-474762 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:09:17.890945   65923 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:09:17.891001   65923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:09:17.931706   65923 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 20:09:17.931775   65923 ssh_runner.go:195] Run: which lz4
	I0421 20:09:17.936738   65923 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 20:09:17.941610   65923 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:09:17.941657   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 20:09:18.299490   66415 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0421 20:09:18.299647   66415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:09:18.299707   66415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:09:18.317993   66415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0421 20:09:18.318578   66415 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:09:18.319303   66415 main.go:141] libmachine: Using API Version  1
	I0421 20:09:18.319330   66415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:09:18.319902   66415 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:09:18.320131   66415 main.go:141] libmachine: (kindnet-474762) Calling .GetMachineName
	I0421 20:09:18.320330   66415 main.go:141] libmachine: (kindnet-474762) Calling .DriverName
	I0421 20:09:18.320527   66415 start.go:159] libmachine.API.Create for "kindnet-474762" (driver="kvm2")
	I0421 20:09:18.320583   66415 client.go:168] LocalClient.Create starting
	I0421 20:09:18.320628   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 20:09:18.320669   66415 main.go:141] libmachine: Decoding PEM data...
	I0421 20:09:18.320693   66415 main.go:141] libmachine: Parsing certificate...
	I0421 20:09:18.320764   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 20:09:18.320799   66415 main.go:141] libmachine: Decoding PEM data...
	I0421 20:09:18.320821   66415 main.go:141] libmachine: Parsing certificate...
	I0421 20:09:18.320847   66415 main.go:141] libmachine: Running pre-create checks...
	I0421 20:09:18.320866   66415 main.go:141] libmachine: (kindnet-474762) Calling .PreCreateCheck
	I0421 20:09:18.321258   66415 main.go:141] libmachine: (kindnet-474762) Calling .GetConfigRaw
	I0421 20:09:18.321764   66415 main.go:141] libmachine: Creating machine...
	I0421 20:09:18.321781   66415 main.go:141] libmachine: (kindnet-474762) Calling .Create
	I0421 20:09:18.321917   66415 main.go:141] libmachine: (kindnet-474762) Creating KVM machine...
	I0421 20:09:18.323610   66415 main.go:141] libmachine: (kindnet-474762) DBG | found existing default KVM network
	I0421 20:09:18.325624   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:18.325438   66438 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000229560}
	I0421 20:09:18.325683   66415 main.go:141] libmachine: (kindnet-474762) DBG | created network xml: 
	I0421 20:09:18.325728   66415 main.go:141] libmachine: (kindnet-474762) DBG | <network>
	I0421 20:09:18.325752   66415 main.go:141] libmachine: (kindnet-474762) DBG |   <name>mk-kindnet-474762</name>
	I0421 20:09:18.325765   66415 main.go:141] libmachine: (kindnet-474762) DBG |   <dns enable='no'/>
	I0421 20:09:18.325772   66415 main.go:141] libmachine: (kindnet-474762) DBG |   
	I0421 20:09:18.325782   66415 main.go:141] libmachine: (kindnet-474762) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0421 20:09:18.325794   66415 main.go:141] libmachine: (kindnet-474762) DBG |     <dhcp>
	I0421 20:09:18.325808   66415 main.go:141] libmachine: (kindnet-474762) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0421 20:09:18.325819   66415 main.go:141] libmachine: (kindnet-474762) DBG |     </dhcp>
	I0421 20:09:18.325844   66415 main.go:141] libmachine: (kindnet-474762) DBG |   </ip>
	I0421 20:09:18.325857   66415 main.go:141] libmachine: (kindnet-474762) DBG |   
	I0421 20:09:18.325870   66415 main.go:141] libmachine: (kindnet-474762) DBG | </network>
	I0421 20:09:18.325876   66415 main.go:141] libmachine: (kindnet-474762) DBG | 
	I0421 20:09:18.331939   66415 main.go:141] libmachine: (kindnet-474762) DBG | trying to create private KVM network mk-kindnet-474762 192.168.39.0/24...
	I0421 20:09:18.423083   66415 main.go:141] libmachine: (kindnet-474762) DBG | private KVM network mk-kindnet-474762 192.168.39.0/24 created
	I0421 20:09:18.423130   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:18.423085   66438 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:09:18.423149   66415 main.go:141] libmachine: (kindnet-474762) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762 ...
	I0421 20:09:18.423162   66415 main.go:141] libmachine: (kindnet-474762) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 20:09:18.423226   66415 main.go:141] libmachine: (kindnet-474762) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 20:09:18.703500   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:18.703368   66438 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762/id_rsa...
	I0421 20:09:18.988243   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:18.988107   66438 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762/kindnet-474762.rawdisk...
	I0421 20:09:18.988275   66415 main.go:141] libmachine: (kindnet-474762) DBG | Writing magic tar header
	I0421 20:09:18.988309   66415 main.go:141] libmachine: (kindnet-474762) DBG | Writing SSH key tar header
	I0421 20:09:18.988328   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:18.988256   66438 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762 ...
	I0421 20:09:18.988388   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762
	I0421 20:09:18.988423   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 20:09:18.988434   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:09:18.988446   66415 main.go:141] libmachine: (kindnet-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762 (perms=drwx------)
	I0421 20:09:18.988456   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 20:09:18.988470   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 20:09:18.988478   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home/jenkins
	I0421 20:09:18.988488   66415 main.go:141] libmachine: (kindnet-474762) DBG | Checking permissions on dir: /home
	I0421 20:09:18.988496   66415 main.go:141] libmachine: (kindnet-474762) DBG | Skipping /home - not owner
	I0421 20:09:18.988510   66415 main.go:141] libmachine: (kindnet-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 20:09:18.988520   66415 main.go:141] libmachine: (kindnet-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 20:09:18.988584   66415 main.go:141] libmachine: (kindnet-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 20:09:18.988619   66415 main.go:141] libmachine: (kindnet-474762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 20:09:18.988638   66415 main.go:141] libmachine: (kindnet-474762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 20:09:18.988649   66415 main.go:141] libmachine: (kindnet-474762) Creating domain...
	I0421 20:09:18.989880   66415 main.go:141] libmachine: (kindnet-474762) define libvirt domain using xml: 
	I0421 20:09:18.989904   66415 main.go:141] libmachine: (kindnet-474762) <domain type='kvm'>
	I0421 20:09:18.989933   66415 main.go:141] libmachine: (kindnet-474762)   <name>kindnet-474762</name>
	I0421 20:09:18.989956   66415 main.go:141] libmachine: (kindnet-474762)   <memory unit='MiB'>3072</memory>
	I0421 20:09:18.989973   66415 main.go:141] libmachine: (kindnet-474762)   <vcpu>2</vcpu>
	I0421 20:09:18.989983   66415 main.go:141] libmachine: (kindnet-474762)   <features>
	I0421 20:09:18.989992   66415 main.go:141] libmachine: (kindnet-474762)     <acpi/>
	I0421 20:09:18.989999   66415 main.go:141] libmachine: (kindnet-474762)     <apic/>
	I0421 20:09:18.990007   66415 main.go:141] libmachine: (kindnet-474762)     <pae/>
	I0421 20:09:18.990029   66415 main.go:141] libmachine: (kindnet-474762)     
	I0421 20:09:18.990044   66415 main.go:141] libmachine: (kindnet-474762)   </features>
	I0421 20:09:18.990069   66415 main.go:141] libmachine: (kindnet-474762)   <cpu mode='host-passthrough'>
	I0421 20:09:18.990082   66415 main.go:141] libmachine: (kindnet-474762)   
	I0421 20:09:18.990093   66415 main.go:141] libmachine: (kindnet-474762)   </cpu>
	I0421 20:09:18.990101   66415 main.go:141] libmachine: (kindnet-474762)   <os>
	I0421 20:09:18.990108   66415 main.go:141] libmachine: (kindnet-474762)     <type>hvm</type>
	I0421 20:09:18.990131   66415 main.go:141] libmachine: (kindnet-474762)     <boot dev='cdrom'/>
	I0421 20:09:18.990139   66415 main.go:141] libmachine: (kindnet-474762)     <boot dev='hd'/>
	I0421 20:09:18.990147   66415 main.go:141] libmachine: (kindnet-474762)     <bootmenu enable='no'/>
	I0421 20:09:18.990158   66415 main.go:141] libmachine: (kindnet-474762)   </os>
	I0421 20:09:18.990173   66415 main.go:141] libmachine: (kindnet-474762)   <devices>
	I0421 20:09:18.990185   66415 main.go:141] libmachine: (kindnet-474762)     <disk type='file' device='cdrom'>
	I0421 20:09:18.990203   66415 main.go:141] libmachine: (kindnet-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762/boot2docker.iso'/>
	I0421 20:09:18.990215   66415 main.go:141] libmachine: (kindnet-474762)       <target dev='hdc' bus='scsi'/>
	I0421 20:09:18.990227   66415 main.go:141] libmachine: (kindnet-474762)       <readonly/>
	I0421 20:09:18.990238   66415 main.go:141] libmachine: (kindnet-474762)     </disk>
	I0421 20:09:18.990251   66415 main.go:141] libmachine: (kindnet-474762)     <disk type='file' device='disk'>
	I0421 20:09:18.990260   66415 main.go:141] libmachine: (kindnet-474762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 20:09:18.990277   66415 main.go:141] libmachine: (kindnet-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/kindnet-474762/kindnet-474762.rawdisk'/>
	I0421 20:09:18.990284   66415 main.go:141] libmachine: (kindnet-474762)       <target dev='hda' bus='virtio'/>
	I0421 20:09:18.990292   66415 main.go:141] libmachine: (kindnet-474762)     </disk>
	I0421 20:09:18.990300   66415 main.go:141] libmachine: (kindnet-474762)     <interface type='network'>
	I0421 20:09:18.990325   66415 main.go:141] libmachine: (kindnet-474762)       <source network='mk-kindnet-474762'/>
	I0421 20:09:18.990335   66415 main.go:141] libmachine: (kindnet-474762)       <model type='virtio'/>
	I0421 20:09:18.990343   66415 main.go:141] libmachine: (kindnet-474762)     </interface>
	I0421 20:09:18.990350   66415 main.go:141] libmachine: (kindnet-474762)     <interface type='network'>
	I0421 20:09:18.990359   66415 main.go:141] libmachine: (kindnet-474762)       <source network='default'/>
	I0421 20:09:18.990378   66415 main.go:141] libmachine: (kindnet-474762)       <model type='virtio'/>
	I0421 20:09:18.990397   66415 main.go:141] libmachine: (kindnet-474762)     </interface>
	I0421 20:09:18.990408   66415 main.go:141] libmachine: (kindnet-474762)     <serial type='pty'>
	I0421 20:09:18.990423   66415 main.go:141] libmachine: (kindnet-474762)       <target port='0'/>
	I0421 20:09:18.990430   66415 main.go:141] libmachine: (kindnet-474762)     </serial>
	I0421 20:09:18.990438   66415 main.go:141] libmachine: (kindnet-474762)     <console type='pty'>
	I0421 20:09:18.990446   66415 main.go:141] libmachine: (kindnet-474762)       <target type='serial' port='0'/>
	I0421 20:09:18.990462   66415 main.go:141] libmachine: (kindnet-474762)     </console>
	I0421 20:09:18.990473   66415 main.go:141] libmachine: (kindnet-474762)     <rng model='virtio'>
	I0421 20:09:18.990482   66415 main.go:141] libmachine: (kindnet-474762)       <backend model='random'>/dev/random</backend>
	I0421 20:09:18.990492   66415 main.go:141] libmachine: (kindnet-474762)     </rng>
	I0421 20:09:18.990500   66415 main.go:141] libmachine: (kindnet-474762)     
	I0421 20:09:18.990509   66415 main.go:141] libmachine: (kindnet-474762)     
	I0421 20:09:18.990517   66415 main.go:141] libmachine: (kindnet-474762)   </devices>
	I0421 20:09:18.990527   66415 main.go:141] libmachine: (kindnet-474762) </domain>
	I0421 20:09:18.990536   66415 main.go:141] libmachine: (kindnet-474762) 
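The XML block logged above is the libvirt domain definition the kvm2 driver submits for the new VM. As an illustration only (an assumption; the driver talks to libvirt through its Go bindings rather than shelling out), the Go sketch below shows the equivalent define-and-start flow via virsh, using a hypothetical file path holding that XML.

// Illustration only (assumed; minikube's kvm2 driver uses libvirt bindings,
// not virsh): defining and starting a domain from XML like the one logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/kindnet-474762.xml" // hypothetical path holding the XML above
	if _, err := os.Stat(xmlPath); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// virsh define registers the domain; virsh start boots it.
	for _, args := range [][]string{
		{"define", xmlPath},
		{"start", "kindnet-474762"},
	} {
		out, err := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
	}
}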
	I0421 20:09:18.995491   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:ed:73:21 in network default
	I0421 20:09:18.996178   66415 main.go:141] libmachine: (kindnet-474762) Ensuring networks are active...
	I0421 20:09:18.996204   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:18.996920   66415 main.go:141] libmachine: (kindnet-474762) Ensuring network default is active
	I0421 20:09:18.997385   66415 main.go:141] libmachine: (kindnet-474762) Ensuring network mk-kindnet-474762 is active
	I0421 20:09:18.998270   66415 main.go:141] libmachine: (kindnet-474762) Getting domain xml...
	I0421 20:09:18.999029   66415 main.go:141] libmachine: (kindnet-474762) Creating domain...
	I0421 20:09:20.481671   66415 main.go:141] libmachine: (kindnet-474762) Waiting to get IP...
	I0421 20:09:20.482732   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:20.483233   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:20.483274   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:20.483215   66438 retry.go:31] will retry after 222.841152ms: waiting for machine to come up
	I0421 20:09:20.707851   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:20.708481   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:20.708517   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:20.708438   66438 retry.go:31] will retry after 337.044818ms: waiting for machine to come up
	I0421 20:09:21.046952   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:21.047507   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:21.047540   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:21.047463   66438 retry.go:31] will retry after 340.922776ms: waiting for machine to come up
	I0421 20:09:21.390192   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:21.390787   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:21.390809   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:21.390734   66438 retry.go:31] will retry after 574.926795ms: waiting for machine to come up
	I0421 20:09:21.967713   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:21.968391   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:21.968422   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:21.968336   66438 retry.go:31] will retry after 595.909133ms: waiting for machine to come up
	I0421 20:09:22.566421   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:22.566959   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:22.566989   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:22.566924   66438 retry.go:31] will retry after 767.199819ms: waiting for machine to come up
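The repeated "will retry after ...: waiting for machine to come up" lines show the driver polling for the VM's DHCP lease with a growing delay between attempts. A minimal Go sketch of that retry pattern, assuming a hypothetical lookupIP helper that stands in for the DHCP-lease query:

// Minimal sketch (not minikube's actual implementation) of the
// retry-with-growing-delay pattern visible in the log above: keep
// asking for the machine's IP until it appears or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the network's DHCP
// leases for the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}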
	I0421 20:09:19.734685   65923 crio.go:462] duration metric: took 1.798001322s to copy over tarball
	I0421 20:09:19.734786   65923 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:09:22.643978   65923 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.909168237s)
	I0421 20:09:22.644002   65923 crio.go:469] duration metric: took 2.909292251s to extract the tarball
	I0421 20:09:22.644010   65923 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:09:22.699687   65923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:09:22.755320   65923 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:09:22.755344   65923 cache_images.go:84] Images are preloaded, skipping loading
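At 20:09:22.699 the runner lists images with crictl and concludes the preload tarball already supplied everything it needs. A small Go sketch of such a check, assuming crictl's JSON output shape (an "images" array with "repoTags"); the image name used in main is only an example:

// Sketch (field names assumed from crictl's JSON output) of the check above:
// list images via crictl and confirm an expected set is already present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func preloaded(want ...string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range want {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded("registry.k8s.io/kube-apiserver:v1.30.0") // example tag
	fmt.Println(ok, err)
}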
	I0421 20:09:22.755353   65923 kubeadm.go:928] updating node { 192.168.50.171 8443 v1.30.0 crio true true} ...
	I0421 20:09:22.755471   65923 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-474762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:auto-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:09:22.755546   65923 ssh_runner.go:195] Run: crio config
	I0421 20:09:22.809963   65923 cni.go:84] Creating CNI manager for ""
	I0421 20:09:22.809990   65923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:09:22.810004   65923 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:09:22.810033   65923 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-474762 NodeName:auto-474762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:09:22.810246   65923 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-474762"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
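	The YAML above is the generated kubeadm/kubelet/kube-proxy configuration that the next steps copy to /var/tmp/minikube/kubeadm.yaml.new and hand to kubeadm. As a hedged sketch (not minikube's code), the Go program below writes a trimmed-down config and runs kubeadm against it in dry-run mode; it assumes kubeadm is on PATH and that the program runs as root on the node.

// Sketch (assumed flow, not minikube's implementation): write a generated
// kubeadm config to the staging path seen in the log and let kubeadm
// validate it with --dry-run.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const cfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`
	// /var/tmp/minikube/kubeadm.yaml.new is the staging path used in the log.
	path := "/var/tmp/minikube/kubeadm.yaml.new"
	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm parses and validates the config; --dry-run avoids changing the node.
	out, err := exec.Command("kubeadm", "init", "--config", path, "--dry-run").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}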
	
	I0421 20:09:22.810313   65923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:09:22.823533   65923 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:09:22.823616   65923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:09:22.836794   65923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0421 20:09:22.859664   65923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:09:22.881451   65923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0421 20:09:22.904189   65923 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I0421 20:09:22.909201   65923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:09:22.926116   65923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:09:23.073893   65923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:09:23.095723   65923 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762 for IP: 192.168.50.171
	I0421 20:09:23.095749   65923 certs.go:194] generating shared ca certs ...
	I0421 20:09:23.095769   65923 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.095964   65923 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:09:23.096017   65923 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:09:23.096030   65923 certs.go:256] generating profile certs ...
	I0421 20:09:23.096095   65923 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.key
	I0421 20:09:23.096111   65923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt with IP's: []
	I0421 20:09:23.201951   65923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt ...
	I0421 20:09:23.201983   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: {Name:mk40867ed8a8cf192d397b3b72c11a46ade1a0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.202178   65923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.key ...
	I0421 20:09:23.202192   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.key: {Name:mk363027a0de36d0bf3f0ae81496fb26aa411cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.202289   65923 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.key.f01deaa7
	I0421 20:09:23.202305   65923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.crt.f01deaa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.171]
	I0421 20:09:23.279961   65923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.crt.f01deaa7 ...
	I0421 20:09:23.279987   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.crt.f01deaa7: {Name:mk0d3a727b5c2e9dac6a4c3f94e99f1a7f41df3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.280134   65923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.key.f01deaa7 ...
	I0421 20:09:23.280153   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.key.f01deaa7: {Name:mk47a48fe78c506a613ff31efc61cb7ec83d40e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.280225   65923 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.crt.f01deaa7 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.crt
	I0421 20:09:23.280304   65923 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.key.f01deaa7 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.key
	I0421 20:09:23.280356   65923 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.key
	I0421 20:09:23.280369   65923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.crt with IP's: []
	I0421 20:09:23.599928   65923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.crt ...
	I0421 20:09:23.599959   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.crt: {Name:mk86328762523e0bed806f6adeed6fa002cb6e86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.600134   65923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.key ...
	I0421 20:09:23.600148   65923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.key: {Name:mke06a51d232b19176414d9bf2ef82551c6125ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:23.600333   65923 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:09:23.600388   65923 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:09:23.600399   65923 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:09:23.600441   65923 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:09:23.600474   65923 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:09:23.600505   65923 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:09:23.600564   65923 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:09:23.601161   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:09:23.633430   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:09:23.663878   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:09:23.692983   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:09:23.721153   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0421 20:09:23.751833   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:09:23.783074   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:09:23.820596   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:09:23.854331   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:09:23.888907   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:09:23.919383   65923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:09:23.949598   65923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:09:23.971828   65923 ssh_runner.go:195] Run: openssl version
	I0421 20:09:23.978562   65923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:09:23.992945   65923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:23.998565   65923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:23.998651   65923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:24.007276   65923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:09:24.025050   65923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:09:24.039599   65923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:09:24.044992   65923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:09:24.045053   65923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:09:24.051877   65923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:09:24.067450   65923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:09:24.082033   65923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:09:24.087967   65923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:09:24.088042   65923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:09:24.094785   65923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
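The openssl and ln commands above compute each CA certificate's OpenSSL subject hash and link it into /etc/ssl/certs/<hash>.0 so the node's TLS stack trusts it. A minimal Go sketch of that hash-and-symlink step (an illustration shelling out to openssl, not minikube's exact helper):

// Sketch of the hash-and-symlink step shown above: compute the OpenSSL
// subject hash of a CA certificate and link it into /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: replace any existing link before creating it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}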
	I0421 20:09:24.109888   65923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:09:24.115638   65923 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:09:24.115711   65923 kubeadm.go:391] StartCluster: {Name:auto-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:09:24.115795   65923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:09:24.115869   65923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:09:24.160225   65923 cri.go:89] found id: ""
	I0421 20:09:24.160300   65923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:09:24.174163   65923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:09:24.186774   65923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:09:24.202636   65923 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:09:24.202660   65923 kubeadm.go:156] found existing configuration files:
	
	I0421 20:09:24.202707   65923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:09:24.214744   65923 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:09:24.214810   65923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:09:24.227132   65923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:09:24.238565   65923 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:09:24.238634   65923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:09:24.251790   65923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:09:24.393070   65923 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:09:24.393165   65923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:09:24.405463   65923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:09:24.417847   65923 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:09:24.417929   65923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:09:24.431542   65923 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:09:24.495017   65923 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:09:24.495110   65923 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:09:24.676506   65923 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:09:24.676636   65923 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:09:24.676769   65923 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:09:24.903178   65923 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:09:24.905010   65923 out.go:204]   - Generating certificates and keys ...
	I0421 20:09:24.905176   65923 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:09:24.905283   65923 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:09:25.061214   65923 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:09:25.138383   65923 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:09:25.314252   65923 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 20:09:25.432539   65923 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 20:09:25.577531   65923 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 20:09:25.577852   65923 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-474762 localhost] and IPs [192.168.50.171 127.0.0.1 ::1]
	I0421 20:09:25.737617   65923 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 20:09:25.737932   65923 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-474762 localhost] and IPs [192.168.50.171 127.0.0.1 ::1]
	I0421 20:09:25.858881   65923 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:09:26.219269   65923 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:09:26.420829   65923 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 20:09:26.421102   65923 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:09:26.607319   65923 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:09:26.727789   65923 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:09:26.982474   65923 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:09:27.162729   65923 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:09:27.211494   65923 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:09:27.212138   65923 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:09:27.214754   65923 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:09:23.335410   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:23.335915   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:23.335943   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:23.335871   66438 retry.go:31] will retry after 735.90689ms: waiting for machine to come up
	I0421 20:09:24.073347   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:24.073818   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:24.073853   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:24.073762   66438 retry.go:31] will retry after 1.06244669s: waiting for machine to come up
	I0421 20:09:25.137311   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:25.137793   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:25.137816   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:25.137757   66438 retry.go:31] will retry after 1.548041089s: waiting for machine to come up
	I0421 20:09:26.687277   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:26.687794   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:26.687824   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:26.687764   66438 retry.go:31] will retry after 2.209717626s: waiting for machine to come up
	I0421 20:09:27.216677   65923 out.go:204]   - Booting up control plane ...
	I0421 20:09:27.216801   65923 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:09:27.216901   65923 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:09:27.218664   65923 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:09:27.239816   65923 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:09:27.241190   65923 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:09:27.241261   65923 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:09:27.411040   65923 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:09:27.411119   65923 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:09:27.912169   65923 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.216041ms
	I0421 20:09:27.912302   65923 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:09:28.899119   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:28.899617   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:28.899652   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:28.899562   66438 retry.go:31] will retry after 2.630634877s: waiting for machine to come up
	I0421 20:09:31.531412   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:31.531992   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:31.532027   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:31.531939   66438 retry.go:31] will retry after 3.615697175s: waiting for machine to come up
	I0421 20:09:33.414230   65923 kubeadm.go:309] [api-check] The API server is healthy after 5.501578841s
	I0421 20:09:33.431188   65923 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:09:33.460916   65923 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:09:33.521586   65923 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:09:33.521876   65923 kubeadm.go:309] [mark-control-plane] Marking the node auto-474762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:09:33.540142   65923 kubeadm.go:309] [bootstrap-token] Using token: 2zh02c.yrwyjmjxhqzzssxm
	I0421 20:09:33.541532   65923 out.go:204]   - Configuring RBAC rules ...
	I0421 20:09:33.541689   65923 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:09:33.550681   65923 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:09:33.565677   65923 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:09:33.573114   65923 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:09:33.581105   65923 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:09:33.586223   65923 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:09:33.827827   65923 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:09:34.292145   65923 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:09:34.822181   65923 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:09:34.822216   65923 kubeadm.go:309] 
	I0421 20:09:34.822291   65923 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:09:34.822301   65923 kubeadm.go:309] 
	I0421 20:09:34.822411   65923 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:09:34.822432   65923 kubeadm.go:309] 
	I0421 20:09:34.822516   65923 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:09:34.822608   65923 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:09:34.822697   65923 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:09:34.822714   65923 kubeadm.go:309] 
	I0421 20:09:34.822789   65923 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:09:34.822800   65923 kubeadm.go:309] 
	I0421 20:09:34.822866   65923 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:09:34.822875   65923 kubeadm.go:309] 
	I0421 20:09:34.822944   65923 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:09:34.823076   65923 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:09:34.823175   65923 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:09:34.823189   65923 kubeadm.go:309] 
	I0421 20:09:34.823313   65923 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:09:34.823427   65923 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:09:34.823436   65923 kubeadm.go:309] 
	I0421 20:09:34.823552   65923 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2zh02c.yrwyjmjxhqzzssxm \
	I0421 20:09:34.823688   65923 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:09:34.823727   65923 kubeadm.go:309] 	--control-plane 
	I0421 20:09:34.823733   65923 kubeadm.go:309] 
	I0421 20:09:34.823833   65923 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:09:34.823843   65923 kubeadm.go:309] 
	I0421 20:09:34.823933   65923 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2zh02c.yrwyjmjxhqzzssxm \
	I0421 20:09:34.824098   65923 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:09:34.824243   65923 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:09:34.824257   65923 cni.go:84] Creating CNI manager for ""
	I0421 20:09:34.824265   65923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:09:34.825978   65923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:09:35.150433   66415 main.go:141] libmachine: (kindnet-474762) DBG | domain kindnet-474762 has defined MAC address 52:54:00:2e:26:fd in network mk-kindnet-474762
	I0421 20:09:35.150841   66415 main.go:141] libmachine: (kindnet-474762) DBG | unable to find current IP address of domain kindnet-474762 in network mk-kindnet-474762
	I0421 20:09:35.150865   66415 main.go:141] libmachine: (kindnet-474762) DBG | I0421 20:09:35.150796   66438 retry.go:31] will retry after 4.309204867s: waiting for machine to come up
	I0421 20:09:34.827220   65923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:09:34.840044   65923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:09:34.860809   65923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:09:34.860889   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:34.860901   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-474762 minikube.k8s.io/updated_at=2024_04_21T20_09_34_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=auto-474762 minikube.k8s.io/primary=true
	I0421 20:09:34.905427   65923 ops.go:34] apiserver oom_adj: -16
	I0421 20:09:35.034936   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:35.535269   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:36.035038   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:36.535840   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:37.035316   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:37.535954   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:38.035746   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:38.535046   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:09:39.035837   65923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
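The repeated "kubectl get sa default" runs above poll roughly every half second until the default ServiceAccount exists, which is the signal that the new control plane is serving requests. A minimal Go sketch of the same wait loop (an assumption about intent, with the kubeconfig path taken from the log):

// Sketch of the polling visible above: repeatedly run `kubectl get sa default`
// until it succeeds or a timeout elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("default service account not found within %v", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}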
	
	
	==> CRI-O <==
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.490364372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730182490330012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78489bb5-202b-4639-af33-8ed70afd00b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.491585290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=695eb64c-69a6-4459-8972-53c2cdaa2d85 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.491720099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=695eb64c-69a6-4459-8972-53c2cdaa2d85 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.491998119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=695eb64c-69a6-4459-8972-53c2cdaa2d85 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.538369400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dce74f9b-beaa-47c7-9e3b-fb1e20912d76 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.538477146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dce74f9b-beaa-47c7-9e3b-fb1e20912d76 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.540869841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e744bcd-1e48-491a-bb5c-af1cb9fa15f2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.541361331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730182541337862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e744bcd-1e48-491a-bb5c-af1cb9fa15f2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.542216431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38cc9644-ad3f-4576-bc19-382e8f937f99 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.542264192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38cc9644-ad3f-4576-bc19-382e8f937f99 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.542439932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38cc9644-ad3f-4576-bc19-382e8f937f99 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.587457404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb338ea6-a15e-46d6-bf00-d2587088ccc9 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.587530087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb338ea6-a15e-46d6-bf00-d2587088ccc9 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.588830490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42fa4f16-29f5-4009-ac22-5c74fe42d40d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.589441352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730182589416768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42fa4f16-29f5-4009-ac22-5c74fe42d40d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.589939693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfa102ce-779f-4ce1-986b-9a73cdfc5dff name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.589987266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfa102ce-779f-4ce1-986b-9a73cdfc5dff name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.590268396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfa102ce-779f-4ce1-986b-9a73cdfc5dff name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.638451695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51c7fdce-89ea-4e55-9a5e-0f97f1b624af name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.638532375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51c7fdce-89ea-4e55-9a5e-0f97f1b624af name=/runtime.v1.RuntimeService/Version
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.639577001Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09add00f-cad8-4349-9e35-c06a1489db27 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.640004161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730182639981409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09add00f-cad8-4349-9e35-c06a1489db27 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.640587929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=415dd064-d8d5-4492-b522-a42284b6aedd name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.640687254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=415dd064-d8d5-4492-b522-a42284b6aedd name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:09:42 default-k8s-diff-port-167454 crio[726]: time="2024-04-21 20:09:42.640865710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb,PodSandboxId:f564ef62c5d36e9babc73f37c6b760130c58e82ca4cb1fc804b28bdf87bbe64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729322043899808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59527419-6bed-43ec-afa1-30d8abbbfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 590d89ea,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6,PodSandboxId:3fe17f468b22e646047441e3c96d3545607eb76d8e0bb5510c031bf5aefa6e63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321103520772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lbtcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0a091d-255b-4d65-81b5-5324a00de777,},Annotations:map[string]string{io.kubernetes.container.hash: f423fbcb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7,PodSandboxId:e490374db989d6574c063cdff7fd668fea986460208e2c7069fb4796bf457ae9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729321016840786,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xmhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3dbf5552-a097-4fb9-99ac-9119d3b8b4c7,},Annotations:map[string]string{io.kubernetes.container.hash: 21cb97f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928,PodSandboxId:4bd10da45563fd2c14f82bdf1e189af7af9c95c2c8320ba861f0c00133047dd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713729319740843699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fe99c0-e9b4-4267-a849-e5de2e9b4e21,},Annotations:map[string]string{io.kubernetes.container.hash: a766abd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37,PodSandboxId:55ef630000fd7cb025abbf22c190170cd4364740f83417fcdf9311eab010e7d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171372930071679776
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840d39f9a1e76cc630bf132ff27e82d,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110,PodSandboxId:457d67ee31ff2b4ffd3d833a6da3d250c4a778c611fae78327a590ddabdda2f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729300696285595,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7421b891daecde11c220ede304fd7e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 89cbd978,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5,PodSandboxId:1d51cf2e60f26f8e54f66178498d0d40e84283da93c7738e3efa6658d21351a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729300634910396,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc6b4af45f2ec88bbdf0c499c541063,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18,PodSandboxId:1806e64fe49f158abac304224f78f784ec334e3d5d52f673180932ded39595c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729300560918172,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-167454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698e4fbda5740b8d2cce358fde8d9931,},Annotations:map[string]string{io.kubernetes.container.hash: 9d3b715d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=415dd064-d8d5-4492-b522-a42284b6aedd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34c9445657c0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   f564ef62c5d36       storage-provisioner
	bf807fae6eb29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   3fe17f468b22e       coredns-7db6d8ff4d-lbtcm
	bd3a5c5cb97eb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   e490374db989d       coredns-7db6d8ff4d-xmhm6
	1b52f85f70be5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 minutes ago      Running             kube-proxy                0                   4bd10da45563f       kube-proxy-wmv4v
	9a048c9824374       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   14 minutes ago      Running             kube-scheduler            2                   55ef630000fd7       kube-scheduler-default-k8s-diff-port-167454
	7242f34bc2713       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   457d67ee31ff2       etcd-default-k8s-diff-port-167454
	ae1315d3ba927       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   14 minutes ago      Running             kube-controller-manager   2                   1d51cf2e60f26       kube-controller-manager-default-k8s-diff-port-167454
	b19255e9ba536       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   14 minutes ago      Running             kube-apiserver            2                   1806e64fe49f1       kube-apiserver-default-k8s-diff-port-167454
	
	
	==> coredns [bd3a5c5cb97eb549af24f64d20a4f61c04b8913754de114492da8e74a8fd18b7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bf807fae6eb298aa39d97c35394f6f4f60b18d5ce297e33603e2dd47ac0f51a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-167454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-167454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=default-k8s-diff-port-167454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-167454
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:09:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:05:40 +0000   Sun, 21 Apr 2024 19:55:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:05:40 +0000   Sun, 21 Apr 2024 19:55:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:05:40 +0000   Sun, 21 Apr 2024 19:55:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:05:40 +0000   Sun, 21 Apr 2024 19:55:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.23
	  Hostname:    default-k8s-diff-port-167454
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 967637a2b8bd47528fa6b40636da4a88
	  System UUID:                967637a2-b8bd-4752-8fa6-b40636da4a88
	  Boot ID:                    c12dc575-9a3c-4272-a89d-76f3bb51232a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lbtcm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-xmhm6                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-167454                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-default-k8s-diff-port-167454             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-167454    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wmv4v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-167454             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-55czz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node default-k8s-diff-port-167454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node default-k8s-diff-port-167454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node default-k8s-diff-port-167454 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node default-k8s-diff-port-167454 event: Registered Node default-k8s-diff-port-167454 in Controller
	
	
	==> dmesg <==
	[  +0.043455] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.957896] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.650763] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.780004] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr21 19:50] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.063734] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068233] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.191799] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.166148] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.330131] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +5.314561] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.069862] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.551101] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +5.626188] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.334779] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.613283] kauditd_printk_skb: 27 callbacks suppressed
	[Apr21 19:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.851109] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[Apr21 19:55] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.075148] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[ +13.599176] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.354561] systemd-fstab-generator[4215]: Ignoring "noauto" option for root device
	[Apr21 19:56] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7242f34bc2713d801ce2d0495540e6032dbd333ade092bccc8c541e17354a110] <==
	{"level":"info","ts":"2024-04-21T19:55:01.871892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-21T19:55:01.872037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-21T19:55:01.872143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad received MsgPreVoteResp from d4daad8799328bad at term 1"}
	{"level":"info","ts":"2024-04-21T19:55:01.872176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad became candidate at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.8722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad received MsgVoteResp from d4daad8799328bad at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.872227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4daad8799328bad became leader at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.872258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4daad8799328bad elected leader d4daad8799328bad at term 2"}
	{"level":"info","ts":"2024-04-21T19:55:01.877256Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.880135Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4daad8799328bad","local-member-attributes":"{Name:default-k8s-diff-port-167454 ClientURLs:[https://192.168.61.23:2379]}","request-path":"/0/members/d4daad8799328bad/attributes","cluster-id":"9a2bb6132dcffac6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T19:55:01.880288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:55:01.886689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T19:55:01.889561Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9a2bb6132dcffac6","local-member-id":"d4daad8799328bad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.922821Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.889584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T19:55:01.896114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T19:55:01.944563Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T19:55:01.94474Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T19:55:01.951696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.23:2379"}
	{"level":"info","ts":"2024-04-21T20:00:03.299858Z","caller":"traceutil/trace.go:171","msg":"trace[132640198] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"220.761303ms","start":"2024-04-21T20:00:03.079046Z","end":"2024-04-21T20:00:03.299807Z","steps":["trace[132640198] 'process raft request'  (duration: 220.505565ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:05:01.944376Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-04-21T20:05:01.955958Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":677,"took":"11.102853ms","hash":1562291772,"current-db-size-bytes":2383872,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2383872,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-21T20:05:01.956143Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1562291772,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2024-04-21T20:09:07.326012Z","caller":"traceutil/trace.go:171","msg":"trace[1200601935] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"195.945101ms","start":"2024-04-21T20:09:07.130024Z","end":"2024-04-21T20:09:07.325969Z","steps":["trace[1200601935] 'process raft request'  (duration: 195.803146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:09:23.788412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.455399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-21T20:09:23.788904Z","caller":"traceutil/trace.go:171","msg":"trace[468028387] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1132; }","duration":"105.029485ms","start":"2024-04-21T20:09:23.683841Z","end":"2024-04-21T20:09:23.78887Z","steps":["trace[468028387] 'count revisions from in-memory index tree'  (duration: 104.397567ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:09:43 up 19 min,  0 users,  load average: 0.68, 0.35, 0.23
	Linux default-k8s-diff-port-167454 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b19255e9ba536a81b7f81bd8eac416c14599a32cc6d9fb766fcaae4ded17af18] <==
	I0421 20:03:04.426987       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:05:03.431330       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:05:03.431719       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0421 20:05:04.432300       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:05:04.432448       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:05:04.432486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:05:04.432319       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:05:04.432612       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:05:04.434639       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:06:04.433130       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:06:04.433209       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:06:04.433220       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:06:04.435592       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:06:04.435724       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:06:04.435761       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:08:04.433441       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:08:04.433612       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:08:04.433622       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:08:04.435911       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:08:04.436037       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:08:04.436044       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae1315d3ba9270948f5736f2180af9d0d26140c3469ac4563d035f0114f3dfa5] <==
	I0421 20:03:49.471491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:04:18.990246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:04:19.481996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:04:48.995654       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:04:49.490247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:05:19.003273       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:05:19.504915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:05:49.009937       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:05:49.514244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:06:19.015285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:06:19.522310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:06:25.468824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="83.579µs"
	I0421 20:06:40.461572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="54.015µs"
	E0421 20:06:49.020888       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:06:49.531156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:07:19.029778       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:07:19.539960       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:07:49.035323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:07:49.548756       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:08:19.040495       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:08:19.557333       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:08:49.051793       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:08:49.568980       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:09:19.063889       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:09:19.585587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1b52f85f70be55903aa2050e39bddb2b5a7a67b456b53fedc3085fb8a01b4928] <==
	I0421 19:55:20.107566       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:55:20.138623       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.23"]
	I0421 19:55:20.341122       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:55:20.341174       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:55:20.341192       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:55:20.352265       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:55:20.352455       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:55:20.352471       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:55:20.354020       1 config.go:192] "Starting service config controller"
	I0421 19:55:20.354132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:55:20.354222       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:55:20.354229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:55:20.354549       1 config.go:319] "Starting node config controller"
	I0421 19:55:20.354555       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:55:20.455028       1 shared_informer.go:320] Caches are synced for node config
	I0421 19:55:20.455159       1 shared_informer.go:320] Caches are synced for service config
	I0421 19:55:20.455197       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9a048c98243746d961d11cccb010e8c50630793fe754bf5d17514bec85a29b37] <==
	W0421 19:55:04.488334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 19:55:04.488590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 19:55:04.510362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:55:04.510428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:55:04.510484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:55:04.510526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:55:04.561508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:55:04.561613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 19:55:04.695751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 19:55:04.695810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 19:55:04.705347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 19:55:04.705401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 19:55:04.784284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:55:04.784391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:55:04.818722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 19:55:04.818786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 19:55:04.831349       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:55:04.832148       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:55:04.841590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 19:55:04.841726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 19:55:04.851870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:55:04.852018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:55:04.863378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 19:55:04.863529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0421 19:55:07.131694       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:07:06 default-k8s-diff-port-167454 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:07:06 default-k8s-diff-port-167454 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:07:06 default-k8s-diff-port-167454 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:07:17 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:07:17.441803    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:07:30 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:07:30.441299    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:07:44 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:07:44.443690    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:07:56 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:07:56.440819    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:08:06 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:08:06.488973    3952 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:08:06 default-k8s-diff-port-167454 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:08:06 default-k8s-diff-port-167454 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:08:06 default-k8s-diff-port-167454 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:08:06 default-k8s-diff-port-167454 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:08:10 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:08:10.444400    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:08:23 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:08:23.442017    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:08:38 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:08:38.441567    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:08:49 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:08:49.440862    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:09:01 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:09:01.442847    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:09:06 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:09:06.492742    3952 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:09:06 default-k8s-diff-port-167454 kubelet[3952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:09:06 default-k8s-diff-port-167454 kubelet[3952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:09:06 default-k8s-diff-port-167454 kubelet[3952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:09:06 default-k8s-diff-port-167454 kubelet[3952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:09:12 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:09:12.442725    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:09:23 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:09:23.441786    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	Apr 21 20:09:37 default-k8s-diff-port-167454 kubelet[3952]: E0421 20:09:37.440764    3952 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-55czz" podUID="9bd6c32b-2526-40c9-8096-fb9fef26e927"
	
	
	==> storage-provisioner [34c9445657c0ca1f19792491cb475d1143d53e97c03c226de5acdb9e17790ecb] <==
	I0421 19:55:22.174806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 19:55:22.196796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 19:55:22.196864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 19:55:22.213435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff2f4d85-462c-45eb-b00e-b06214698f91", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-167454_57532d66-9eeb-40d6-bd5e-439b95854ee7 became leader
	I0421 19:55:22.215758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 19:55:22.217801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167454_57532d66-9eeb-40d6-bd5e-439b95854ee7!
	I0421 19:55:22.319147       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-167454_57532d66-9eeb-40d6-bd5e-439b95854ee7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-55czz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 describe pod metrics-server-569cc877fc-55czz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167454 describe pod metrics-server-569cc877fc-55czz: exit status 1 (66.045831ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-55czz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-167454 describe pod metrics-server-569cc877fc-55czz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (317.44s)
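The post-mortem above reduces to two queries: list every pod whose phase is not Running (helpers_test.go:261), then describe whatever turns up (here metrics-server-569cc877fc-55czz, which was already gone by the time describe ran, hence the NotFound error). Purely as an illustrative sketch, and not part of the generated report or of the test harness itself, an equivalent manual check with client-go might look like the following; the kubeconfig path and the assumption that the profile's context is already selected are illustrative, not taken from the report:

// listnotrunning.go: hypothetical helper mirroring the post-mortem query
// "kubectl get po -A --field-selector=status.phase!=Running" shown above.
// Assumes a kubeconfig at the default location with the desired minikube
// profile's context already selected.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (or whatever clientcmd considers the default path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}
	// Same filter the helper uses: every pod, in any namespace, whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatalf("list pods: %v", err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}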

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727235 -n embed-certs-727235
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-21 20:14:22.951877133 +0000 UTC m=+6771.476012571
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-727235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-727235 logs -n 25: (1.394342822s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo docker                        | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo cat                           | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo                               | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo find                          | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-474762 sudo crio                          | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p flannel-474762                                    | flannel-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	| ssh     | -p bridge-474762 pgrep -a                            | bridge-474762  | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 20:12:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 20:12:21.326743   73732 out.go:291] Setting OutFile to fd 1 ...
	I0421 20:12:21.326859   73732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:12:21.326870   73732 out.go:304] Setting ErrFile to fd 2...
	I0421 20:12:21.326877   73732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:12:21.327116   73732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 20:12:21.327755   73732 out.go:298] Setting JSON to false
	I0421 20:12:21.328878   73732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6839,"bootTime":1713723502,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 20:12:21.328942   73732 start.go:139] virtualization: kvm guest
	I0421 20:12:21.331330   73732 out.go:177] * [bridge-474762] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 20:12:21.332945   73732 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:12:21.334414   73732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:12:21.332963   73732 notify.go:220] Checking for updates...
	I0421 20:12:21.335865   73732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:12:21.337290   73732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:12:21.338693   73732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 20:12:21.340049   73732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:12:21.341849   73732 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:21.341955   73732 config.go:182] Loaded profile config "enable-default-cni-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:21.342044   73732 config.go:182] Loaded profile config "flannel-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:21.342163   73732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:12:21.379252   73732 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 20:12:21.380597   73732 start.go:297] selected driver: kvm2
	I0421 20:12:21.380609   73732 start.go:901] validating driver "kvm2" against <nil>
	I0421 20:12:21.380620   73732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:12:21.381311   73732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:12:21.381386   73732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 20:12:21.397623   73732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 20:12:21.397665   73732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 20:12:21.397859   73732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:12:21.397917   73732 cni.go:84] Creating CNI manager for "bridge"
	I0421 20:12:21.397926   73732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 20:12:21.397972   73732 start.go:340] cluster config:
	{Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:12:21.398084   73732 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:12:21.399858   73732 out.go:177] * Starting "bridge-474762" primary control-plane node in "bridge-474762" cluster
	I0421 20:12:18.798121   70482 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:12:18.815066   70482 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:12:18.838098   70482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:12:18.838185   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:18.838197   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-474762 minikube.k8s.io/updated_at=2024_04_21T20_12_18_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=enable-default-cni-474762 minikube.k8s.io/primary=true
	I0421 20:12:19.035190   70482 ops.go:34] apiserver oom_adj: -16
	I0421 20:12:19.035322   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:19.535436   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:20.035658   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:20.535758   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:21.035557   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:21.535511   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:22.036413   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:18.379337   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:18.379897   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find current IP address of domain flannel-474762 in network mk-flannel-474762
	I0421 20:12:18.379924   72192 main.go:141] libmachine: (flannel-474762) DBG | I0421 20:12:18.379852   72233 retry.go:31] will retry after 3.592579622s: waiting for machine to come up
	I0421 20:12:21.975794   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:21.976255   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find current IP address of domain flannel-474762 in network mk-flannel-474762
	I0421 20:12:21.976292   72192 main.go:141] libmachine: (flannel-474762) DBG | I0421 20:12:21.976221   72233 retry.go:31] will retry after 3.496699336s: waiting for machine to come up
	I0421 20:12:21.401243   73732 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:12:21.401273   73732 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 20:12:21.401280   73732 cache.go:56] Caching tarball of preloaded images
	I0421 20:12:21.401345   73732 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 20:12:21.401355   73732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 20:12:21.401431   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/config.json ...
	I0421 20:12:21.401446   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/config.json: {Name:mk0694007987d491726509cb12151f8bc7d2b0cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:21.401552   73732 start.go:360] acquireMachinesLock for bridge-474762: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:12:22.536125   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:23.036360   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:23.535934   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:24.035889   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:24.536064   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:25.035799   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:25.536246   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:26.036020   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:26.535751   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:27.035539   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:25.474014   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:25.474526   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find current IP address of domain flannel-474762 in network mk-flannel-474762
	I0421 20:12:25.474552   72192 main.go:141] libmachine: (flannel-474762) DBG | I0421 20:12:25.474496   72233 retry.go:31] will retry after 5.979097526s: waiting for machine to come up
	I0421 20:12:27.536115   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:28.035647   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:28.535807   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:29.035500   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:29.536266   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:30.035918   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:30.535542   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:31.036242   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:31.536122   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:32.035424   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:32.202448   70482 kubeadm.go:1107] duration metric: took 13.364343795s to wait for elevateKubeSystemPrivileges
	W0421 20:12:32.202492   70482 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:12:32.202502   70482 kubeadm.go:393] duration metric: took 26.040925967s to StartCluster
	I0421 20:12:32.202525   70482 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:32.202596   70482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:12:32.204550   70482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:32.204847   70482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:12:32.204862   70482 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:12:32.204930   70482 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-474762"
	I0421 20:12:32.204948   70482 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-474762"
	I0421 20:12:32.204964   70482 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-474762"
	I0421 20:12:32.204990   70482 host.go:66] Checking if "enable-default-cni-474762" exists ...
	I0421 20:12:32.204996   70482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-474762"
	I0421 20:12:32.205031   70482 config.go:182] Loaded profile config "enable-default-cni-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:32.204840   70482 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:12:32.207108   70482 out.go:177] * Verifying Kubernetes components...
	I0421 20:12:32.205471   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.207157   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.205487   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.207192   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.208722   70482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:12:32.223460   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0421 20:12:32.224078   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.224647   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.224671   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.225042   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.225858   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.225891   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.227469   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0421 20:12:32.228199   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.228857   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.228882   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.229351   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.229571   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetState
	I0421 20:12:32.233549   70482 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-474762"
	I0421 20:12:32.233597   70482 host.go:66] Checking if "enable-default-cni-474762" exists ...
	I0421 20:12:32.234025   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.234047   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.243026   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0421 20:12:32.243480   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.244592   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.244609   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.244996   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.245274   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetState
	I0421 20:12:32.247036   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .DriverName
	I0421 20:12:32.248978   70482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:12:31.456651   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.457178   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has current primary IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.457199   72192 main.go:141] libmachine: (flannel-474762) Found IP for machine: 192.168.61.193
	I0421 20:12:31.457211   72192 main.go:141] libmachine: (flannel-474762) Reserving static IP address...
	I0421 20:12:31.457533   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find host DHCP lease matching {name: "flannel-474762", mac: "52:54:00:e5:f0:3c", ip: "192.168.61.193"} in network mk-flannel-474762
	I0421 20:12:31.534817   72192 main.go:141] libmachine: (flannel-474762) DBG | Getting to WaitForSSH function...
	I0421 20:12:31.534847   72192 main.go:141] libmachine: (flannel-474762) Reserved static IP address: 192.168.61.193
	I0421 20:12:31.534860   72192 main.go:141] libmachine: (flannel-474762) Waiting for SSH to be available...
	I0421 20:12:31.537540   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.537967   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.537996   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.538131   72192 main.go:141] libmachine: (flannel-474762) DBG | Using SSH client type: external
	I0421 20:12:31.538156   72192 main.go:141] libmachine: (flannel-474762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa (-rw-------)
	I0421 20:12:31.538198   72192 main.go:141] libmachine: (flannel-474762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 20:12:31.538216   72192 main.go:141] libmachine: (flannel-474762) DBG | About to run SSH command:
	I0421 20:12:31.538232   72192 main.go:141] libmachine: (flannel-474762) DBG | exit 0
	I0421 20:12:31.670677   72192 main.go:141] libmachine: (flannel-474762) DBG | SSH cmd err, output: <nil>: 
	I0421 20:12:31.670993   72192 main.go:141] libmachine: (flannel-474762) KVM machine creation complete!
	I0421 20:12:31.671348   72192 main.go:141] libmachine: (flannel-474762) Calling .GetConfigRaw
	I0421 20:12:31.671903   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:31.672101   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:31.672308   72192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 20:12:31.672337   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:12:31.674018   72192 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 20:12:31.674037   72192 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 20:12:31.674045   72192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 20:12:31.674054   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:31.676634   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.677065   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.677101   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.677224   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:31.677426   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.677581   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.677727   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:31.677933   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:31.678206   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:31.678222   72192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 20:12:31.790033   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
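The probe above is the driver's SSH liveness check: it simply runs "exit 0" over SSH until the command succeeds. As an illustration only (this sketch is not minikube's source), the same idea in Go using golang.org/x/crypto/ssh; the address, user and key path are placeholders.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH retries a no-op command ("exit 0") until the host accepts it.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0") // same probe as the log above
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
}

func main() {
	// Hypothetical values; substitute the VM's IP and key path.
	if err := waitForSSH("192.168.61.193:22", "docker", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}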
	I0421 20:12:31.790084   72192 main.go:141] libmachine: Detecting the provisioner...
	I0421 20:12:31.790094   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:31.792728   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.793156   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.793183   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.793366   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:31.793557   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.793721   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.793854   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:31.793994   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:31.794260   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:31.794279   72192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 20:12:31.907518   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 20:12:31.907609   72192 main.go:141] libmachine: found compatible host: buildroot
	I0421 20:12:31.907632   72192 main.go:141] libmachine: Provisioning with buildroot...
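The provisioner is detected by running cat /etc/os-release and matching the ID field (here: buildroot). A minimal Go sketch of that parsing, for illustration; the function name is hypothetical and not taken from minikube.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID returns the ID= value from an os-release style file
// (e.g. "buildroot", "ubuntu"). Values may be quoted.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("ID= not found in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Println("detect failed:", err)
		return
	}
	fmt.Println("detected provisioner id:", id) // e.g. buildroot
}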
	I0421 20:12:31.907646   72192 main.go:141] libmachine: (flannel-474762) Calling .GetMachineName
	I0421 20:12:31.907914   72192 buildroot.go:166] provisioning hostname "flannel-474762"
	I0421 20:12:31.907944   72192 main.go:141] libmachine: (flannel-474762) Calling .GetMachineName
	I0421 20:12:31.908067   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:31.910582   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.910924   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.910961   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.911089   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:31.911282   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.911457   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.911628   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:31.911821   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:31.911995   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:31.912008   72192 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-474762 && echo "flannel-474762" | sudo tee /etc/hostname
	I0421 20:12:32.046907   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-474762
	
	I0421 20:12:32.046936   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.050349   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.050687   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.050716   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.050949   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:32.051142   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.051311   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.051538   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:32.051760   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:32.051971   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:32.051994   72192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-474762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-474762/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-474762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:12:32.187456   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
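The shell snippet above keeps /etc/hosts idempotent: it only touches the file when the hostname is missing, preferring to rewrite an existing 127.0.1.1 line before appending a new one. A small Go sketch of the same rule applied to a hosts-file string, for illustration (ensureHostsEntry is a hypothetical helper).

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic: if the hostname is not already
// mapped, rewrite the 127.0.1.1 line if present, otherwise append one.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already mapped
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(in, "flannel-474762"))
}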
	I0421 20:12:32.187486   72192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 20:12:32.187540   72192 buildroot.go:174] setting up certificates
	I0421 20:12:32.187555   72192 provision.go:84] configureAuth start
	I0421 20:12:32.187575   72192 main.go:141] libmachine: (flannel-474762) Calling .GetMachineName
	I0421 20:12:32.187920   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:32.190703   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.191093   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.191123   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.191264   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.193823   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.194130   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.194156   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.194326   72192 provision.go:143] copyHostCerts
	I0421 20:12:32.194388   72192 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 20:12:32.194400   72192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 20:12:32.194484   72192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 20:12:32.194622   72192 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 20:12:32.194636   72192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 20:12:32.194676   72192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 20:12:32.194754   72192 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 20:12:32.194766   72192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 20:12:32.194800   72192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 20:12:32.194919   72192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.flannel-474762 san=[127.0.0.1 192.168.61.193 flannel-474762 localhost minikube]
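provision.go generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost and minikube. A hedged Go sketch of issuing a certificate with that SAN list via crypto/x509; unlike the real flow it self-signs instead of signing with the profile CA, purely to keep the example short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-474762"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		DNSNames:    []string{"flannel-474762", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.193")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "wrote server certificate PEM to stdout")
}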
	I0421 20:12:32.607939   72192 provision.go:177] copyRemoteCerts
	I0421 20:12:32.607991   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:12:32.608017   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.610847   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.611192   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.611245   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.611384   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:32.611573   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.611776   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:32.611927   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.525203   73732 start.go:364] duration metric: took 12.123630486s to acquireMachinesLock for "bridge-474762"
	I0421 20:12:33.525276   73732 start.go:93] Provisioning new machine with config: &{Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:12:33.525458   73732 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 20:12:32.251335   70482 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:12:32.251356   70482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:12:32.251376   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHHostname
	I0421 20:12:32.254886   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.255244   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0421 20:12:32.255433   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:94:25", ip: ""} in network mk-enable-default-cni-474762: {Iface:virbr1 ExpiryTime:2024-04-21 21:11:50 +0000 UTC Type:0 Mac:52:54:00:3e:94:25 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:enable-default-cni-474762 Clientid:01:52:54:00:3e:94:25}
	I0421 20:12:32.255448   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined IP address 192.168.39.147 and MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.255605   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.255692   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHPort
	I0421 20:12:32.255837   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.256262   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.257209   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.257436   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHUsername
	I0421 20:12:32.257586   70482 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/enable-default-cni-474762/id_rsa Username:docker}
	I0421 20:12:32.257720   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.258365   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.258386   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.274088   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0421 20:12:32.274737   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.275421   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.275442   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.275895   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.276074   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetState
	I0421 20:12:32.277850   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .DriverName
	I0421 20:12:32.278647   70482 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:12:32.278662   70482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:12:32.278680   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHHostname
	I0421 20:12:32.282461   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.282843   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:94:25", ip: ""} in network mk-enable-default-cni-474762: {Iface:virbr1 ExpiryTime:2024-04-21 21:11:50 +0000 UTC Type:0 Mac:52:54:00:3e:94:25 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:enable-default-cni-474762 Clientid:01:52:54:00:3e:94:25}
	I0421 20:12:32.282864   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined IP address 192.168.39.147 and MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.283085   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHPort
	I0421 20:12:32.283299   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.283476   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHUsername
	I0421 20:12:32.283653   70482 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/enable-default-cni-474762/id_rsa Username:docker}
	I0421 20:12:32.526452   70482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:12:32.526647   70482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:12:32.563628   70482 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-474762" to be "Ready" ...
	I0421 20:12:32.606472   70482 node_ready.go:49] node "enable-default-cni-474762" has status "Ready":"True"
	I0421 20:12:32.606496   70482 node_ready.go:38] duration metric: took 42.82555ms for node "enable-default-cni-474762" to be "Ready" ...
	I0421 20:12:32.606508   70482 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:12:32.635956   70482 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace to be "Ready" ...
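node_ready.go polls the node object until its Ready condition reports True (42ms here because the control plane was already up), then waits for the system-critical pods the same way. A rough client-go sketch of the node check, for illustration only; the kubeconfig path is a placeholder and the loop is simplified compared to minikube's helpers.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the log uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "enable-default-cni-474762", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}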
	I0421 20:12:32.708739   70482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:12:32.796406   70482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:12:33.479655   70482 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0421 20:12:33.479745   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:33.479775   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:33.480076   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | Closing plugin on server side
	I0421 20:12:33.480128   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:33.480136   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:33.480144   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:33.480152   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:33.480368   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:33.480384   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:33.480406   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | Closing plugin on server side
	I0421 20:12:33.503955   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:33.503978   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:33.504295   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:33.504310   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:34.049278   70482 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-474762" context rescaled to 1 replicas
	I0421 20:12:34.241503   70482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.445052637s)
	I0421 20:12:34.241559   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:34.241573   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:34.241823   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:34.241837   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:34.241847   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:34.241854   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:34.242160   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:34.242174   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:34.244044   70482 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 20:12:32.704322   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:12:32.733332   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0421 20:12:32.760670   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:12:32.791659   72192 provision.go:87] duration metric: took 604.087927ms to configureAuth
	I0421 20:12:32.791686   72192 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:12:32.791888   72192 config.go:182] Loaded profile config "flannel-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:32.791954   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.795174   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.795609   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.795652   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.795817   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:32.796014   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.796183   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.796304   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:32.796465   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:32.796689   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:32.796712   72192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 20:12:33.124311   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 20:12:33.124340   72192 main.go:141] libmachine: Checking connection to Docker...
	I0421 20:12:33.124350   72192 main.go:141] libmachine: (flannel-474762) Calling .GetURL
	I0421 20:12:33.125711   72192 main.go:141] libmachine: (flannel-474762) DBG | Using libvirt version 6000000
	I0421 20:12:33.128253   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.128646   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.128679   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.128821   72192 main.go:141] libmachine: Docker is up and running!
	I0421 20:12:33.128839   72192 main.go:141] libmachine: Reticulating splines...
	I0421 20:12:33.128863   72192 client.go:171] duration metric: took 30.396087778s to LocalClient.Create
	I0421 20:12:33.128886   72192 start.go:167] duration metric: took 30.396167257s to libmachine.API.Create "flannel-474762"
	I0421 20:12:33.128898   72192 start.go:293] postStartSetup for "flannel-474762" (driver="kvm2")
	I0421 20:12:33.128911   72192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:12:33.128933   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.129232   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:12:33.129261   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.132028   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.132312   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.132344   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.132547   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.132751   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.132907   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.133083   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.230023   72192 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:12:33.236698   72192 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:12:33.236723   72192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 20:12:33.236797   72192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 20:12:33.236925   72192 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 20:12:33.237036   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:12:33.248554   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:12:33.288956   72192 start.go:296] duration metric: took 160.043514ms for postStartSetup
	I0421 20:12:33.289010   72192 main.go:141] libmachine: (flannel-474762) Calling .GetConfigRaw
	I0421 20:12:33.328895   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:33.332130   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.332650   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.332672   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.333159   72192 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/config.json ...
	I0421 20:12:33.400280   72192 start.go:128] duration metric: took 30.687751144s to createHost
	I0421 20:12:33.400327   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.403706   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.404160   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.404183   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.404364   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.404635   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.404827   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.404995   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.405124   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:33.405345   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:33.405357   72192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 20:12:33.525039   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713730353.513946423
	
	I0421 20:12:33.525061   72192 fix.go:216] guest clock: 1713730353.513946423
	I0421 20:12:33.525070   72192 fix.go:229] Guest: 2024-04-21 20:12:33.513946423 +0000 UTC Remote: 2024-04-21 20:12:33.400309273 +0000 UTC m=+30.821670180 (delta=113.63715ms)
	I0421 20:12:33.525095   72192 fix.go:200] guest clock delta is within tolerance: 113.63715ms
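fix.go reads the guest clock over SSH (the date command a few lines up), subtracts the host reading and accepts the result when the delta is small. A tiny Go sketch of that comparison using the numbers from the lines above; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether guest and host clocks differ by at most tol.
func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Values echo the log: guest read 1713730353.513946423, delta ~113.637ms.
	guest := time.Unix(1713730353, 513946423)
	host := guest.Add(-113637150 * time.Nanosecond)
	if d, ok := clockDeltaOK(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
	}
}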
	I0421 20:12:33.525102   72192 start.go:83] releasing machines lock for "flannel-474762", held for 30.812680837s
	I0421 20:12:33.525133   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.525440   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:33.528379   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.528767   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.528838   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.528954   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.529557   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.529770   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.529915   72192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:12:33.529957   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.529980   72192 ssh_runner.go:195] Run: cat /version.json
	I0421 20:12:33.530018   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.533011   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533224   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533415   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.533447   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533606   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.533746   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.533781   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533804   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.533942   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.534111   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.534190   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.534376   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.534390   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.534532   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.657019   72192 ssh_runner.go:195] Run: systemctl --version
	I0421 20:12:33.669786   72192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 20:12:34.137913   72192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 20:12:34.145889   72192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:12:34.145954   72192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:12:34.171202   72192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
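cni.go parks conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix so only the requested CNI stays active, which is what the find/mv command above does. A hedged Go sketch of the same rename pass, for illustration only; it operates on local paths rather than over SSH.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Park any bridge/podman CNI configs so only the chosen CNI is active.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename:", err)
			} else {
				fmt.Println("disabled", m)
			}
		}
	}
}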
	I0421 20:12:34.171236   72192 start.go:494] detecting cgroup driver to use...
	I0421 20:12:34.171293   72192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:12:34.197538   72192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:12:34.219387   72192 docker.go:217] disabling cri-docker service (if available) ...
	I0421 20:12:34.219456   72192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 20:12:34.240560   72192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 20:12:34.262374   72192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 20:12:34.423302   72192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 20:12:34.613903   72192 docker.go:233] disabling docker service ...
	I0421 20:12:34.613975   72192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 20:12:34.636521   72192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 20:12:34.656037   72192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 20:12:34.801762   72192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 20:12:34.979812   72192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 20:12:35.002207   72192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:12:35.030369   72192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 20:12:35.030445   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.047623   72192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 20:12:35.047734   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.079619   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.093458   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.108610   72192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:12:35.122721   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.135454   72192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.157606   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
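crio.go rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon_cgroup and the default_sysctls block are all edited with sed, as the commands above show. A Go sketch of just the cgroup_manager edit as an illustration of the same single-line replacement; setCgroupManager is a hypothetical helper and the path assumes a local file rather than one reached over SSH.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager rewrites the cgroup_manager line of a CRI-O drop-in,
// mirroring the sed call in the log above.
func setCgroupManager(path, manager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", manager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}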
	I0421 20:12:35.170333   72192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:12:35.180812   72192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 20:12:35.180879   72192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 20:12:35.195621   72192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
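When the bridge-nf-call-iptables sysctl is missing (the status 255 above), the flow loads br_netfilter and then turns on IPv4 forwarding before restarting CRI-O. A short Go sketch of those two steps, for illustration; it needs root and mirrors the shell commands rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Load the bridge netfilter module so the sysctl path exists
	// (mirrors "sudo modprobe br_netfilter" in the log).
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe failed: %v: %s\n", err, out)
	}
	// Enable IPv4 forwarding (mirrors "echo 1 > /proc/sys/net/ipv4/ip_forward").
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("ip_forward:", err)
	}
}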
	I0421 20:12:35.208289   72192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:12:35.366682   72192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 20:12:35.543529   72192 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 20:12:35.543594   72192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 20:12:35.549125   72192 start.go:562] Will wait 60s for crictl version
	I0421 20:12:35.549183   72192 ssh_runner.go:195] Run: which crictl
	I0421 20:12:35.553983   72192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:12:35.597517   72192 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 20:12:35.597620   72192 ssh_runner.go:195] Run: crio --version
	I0421 20:12:35.633341   72192 ssh_runner.go:195] Run: crio --version
	I0421 20:12:35.670906   72192 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 20:12:33.537690   73732 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0421 20:12:33.537933   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:33.537991   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:33.553837   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0421 20:12:33.554554   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:33.558401   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:12:33.558432   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:33.559772   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:33.560002   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:33.560172   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:12:33.560360   73732 start.go:159] libmachine.API.Create for "bridge-474762" (driver="kvm2")
	I0421 20:12:33.560387   73732 client.go:168] LocalClient.Create starting
	I0421 20:12:33.560427   73732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 20:12:33.560471   73732 main.go:141] libmachine: Decoding PEM data...
	I0421 20:12:33.560489   73732 main.go:141] libmachine: Parsing certificate...
	I0421 20:12:33.560569   73732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 20:12:33.560604   73732 main.go:141] libmachine: Decoding PEM data...
	I0421 20:12:33.560625   73732 main.go:141] libmachine: Parsing certificate...
	I0421 20:12:33.560671   73732 main.go:141] libmachine: Running pre-create checks...
	I0421 20:12:33.560688   73732 main.go:141] libmachine: (bridge-474762) Calling .PreCreateCheck
	I0421 20:12:33.561223   73732 main.go:141] libmachine: (bridge-474762) Calling .GetConfigRaw
	I0421 20:12:33.602748   73732 main.go:141] libmachine: Creating machine...
	I0421 20:12:33.602778   73732 main.go:141] libmachine: (bridge-474762) Calling .Create
	I0421 20:12:33.603098   73732 main.go:141] libmachine: (bridge-474762) Creating KVM machine...
	I0421 20:12:33.604441   73732 main.go:141] libmachine: (bridge-474762) DBG | found existing default KVM network
	I0421 20:12:33.605658   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:33.605477   73861 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:3b:8c} reservation:<nil>}
	I0421 20:12:33.606898   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:33.606789   73861 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001134e0}
	I0421 20:12:33.606927   73732 main.go:141] libmachine: (bridge-474762) DBG | created network xml: 
	I0421 20:12:33.606938   73732 main.go:141] libmachine: (bridge-474762) DBG | <network>
	I0421 20:12:33.606952   73732 main.go:141] libmachine: (bridge-474762) DBG |   <name>mk-bridge-474762</name>
	I0421 20:12:33.606960   73732 main.go:141] libmachine: (bridge-474762) DBG |   <dns enable='no'/>
	I0421 20:12:33.606972   73732 main.go:141] libmachine: (bridge-474762) DBG |   
	I0421 20:12:33.606983   73732 main.go:141] libmachine: (bridge-474762) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0421 20:12:33.606990   73732 main.go:141] libmachine: (bridge-474762) DBG |     <dhcp>
	I0421 20:12:33.607005   73732 main.go:141] libmachine: (bridge-474762) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0421 20:12:33.607025   73732 main.go:141] libmachine: (bridge-474762) DBG |     </dhcp>
	I0421 20:12:33.607037   73732 main.go:141] libmachine: (bridge-474762) DBG |   </ip>
	I0421 20:12:33.607043   73732 main.go:141] libmachine: (bridge-474762) DBG |   
	I0421 20:12:33.607051   73732 main.go:141] libmachine: (bridge-474762) DBG | </network>
	I0421 20:12:33.607059   73732 main.go:141] libmachine: (bridge-474762) DBG | 
	I0421 20:12:33.632680   73732 main.go:141] libmachine: (bridge-474762) DBG | trying to create private KVM network mk-bridge-474762 192.168.50.0/24...
	I0421 20:12:33.722819   73732 main.go:141] libmachine: (bridge-474762) DBG | private KVM network mk-bridge-474762 192.168.50.0/24 created
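network.go skips subnets already owned by other profiles (192.168.39.0/24 belongs to virbr1 here) and takes the first free private /24 for the new libvirt network, then defines it with the XML shown above. A hedged Go sketch of that selection; the candidate list is an assumption for illustration, not minikube's actual table.

package main

import "fmt"

// freeSubnet returns the first candidate /24 that is not already taken.
func freeSubnet(taken map[string]bool) (string, bool) {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	for _, c := range candidates {
		if !taken[c] {
			return c, true
		}
	}
	return "", false
}

func main() {
	// 192.168.39.0/24 is taken by an existing network in the log,
	// so 192.168.50.0/24 is chosen for mk-bridge-474762.
	taken := map[string]bool{"192.168.39.0/24": true}
	if s, ok := freeSubnet(taken); ok {
		fmt.Println("using free private subnet", s)
	}
}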
	I0421 20:12:33.722882   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:33.722728   73861 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:12:33.722916   73732 main.go:141] libmachine: (bridge-474762) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762 ...
	I0421 20:12:33.722941   73732 main.go:141] libmachine: (bridge-474762) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 20:12:33.722961   73732 main.go:141] libmachine: (bridge-474762) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 20:12:34.025437   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:34.025262   73861 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa...
	I0421 20:12:34.129107   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:34.128975   73861 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/bridge-474762.rawdisk...
	I0421 20:12:34.129148   73732 main.go:141] libmachine: (bridge-474762) DBG | Writing magic tar header
	I0421 20:12:34.129164   73732 main.go:141] libmachine: (bridge-474762) DBG | Writing SSH key tar header
	I0421 20:12:34.129177   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:34.129119   73861 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762 ...
	I0421 20:12:34.129252   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762
	I0421 20:12:34.129332   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762 (perms=drwx------)
	I0421 20:12:34.129367   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 20:12:34.129381   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 20:12:34.129395   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 20:12:34.129413   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 20:12:34.129426   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 20:12:34.129442   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 20:12:34.129453   73732 main.go:141] libmachine: (bridge-474762) Creating domain...
	I0421 20:12:34.129486   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:12:34.129516   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 20:12:34.129534   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 20:12:34.129547   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins
	I0421 20:12:34.129564   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home
	I0421 20:12:34.129598   73732 main.go:141] libmachine: (bridge-474762) DBG | Skipping /home - not owner
	I0421 20:12:34.130666   73732 main.go:141] libmachine: (bridge-474762) define libvirt domain using xml: 
	I0421 20:12:34.130688   73732 main.go:141] libmachine: (bridge-474762) <domain type='kvm'>
	I0421 20:12:34.130698   73732 main.go:141] libmachine: (bridge-474762)   <name>bridge-474762</name>
	I0421 20:12:34.130706   73732 main.go:141] libmachine: (bridge-474762)   <memory unit='MiB'>3072</memory>
	I0421 20:12:34.130715   73732 main.go:141] libmachine: (bridge-474762)   <vcpu>2</vcpu>
	I0421 20:12:34.130733   73732 main.go:141] libmachine: (bridge-474762)   <features>
	I0421 20:12:34.130741   73732 main.go:141] libmachine: (bridge-474762)     <acpi/>
	I0421 20:12:34.130747   73732 main.go:141] libmachine: (bridge-474762)     <apic/>
	I0421 20:12:34.130758   73732 main.go:141] libmachine: (bridge-474762)     <pae/>
	I0421 20:12:34.130765   73732 main.go:141] libmachine: (bridge-474762)     
	I0421 20:12:34.130773   73732 main.go:141] libmachine: (bridge-474762)   </features>
	I0421 20:12:34.130800   73732 main.go:141] libmachine: (bridge-474762)   <cpu mode='host-passthrough'>
	I0421 20:12:34.130837   73732 main.go:141] libmachine: (bridge-474762)   
	I0421 20:12:34.130866   73732 main.go:141] libmachine: (bridge-474762)   </cpu>
	I0421 20:12:34.130882   73732 main.go:141] libmachine: (bridge-474762)   <os>
	I0421 20:12:34.130903   73732 main.go:141] libmachine: (bridge-474762)     <type>hvm</type>
	I0421 20:12:34.130922   73732 main.go:141] libmachine: (bridge-474762)     <boot dev='cdrom'/>
	I0421 20:12:34.130932   73732 main.go:141] libmachine: (bridge-474762)     <boot dev='hd'/>
	I0421 20:12:34.130940   73732 main.go:141] libmachine: (bridge-474762)     <bootmenu enable='no'/>
	I0421 20:12:34.130947   73732 main.go:141] libmachine: (bridge-474762)   </os>
	I0421 20:12:34.130956   73732 main.go:141] libmachine: (bridge-474762)   <devices>
	I0421 20:12:34.130967   73732 main.go:141] libmachine: (bridge-474762)     <disk type='file' device='cdrom'>
	I0421 20:12:34.130999   73732 main.go:141] libmachine: (bridge-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/boot2docker.iso'/>
	I0421 20:12:34.131020   73732 main.go:141] libmachine: (bridge-474762)       <target dev='hdc' bus='scsi'/>
	I0421 20:12:34.131046   73732 main.go:141] libmachine: (bridge-474762)       <readonly/>
	I0421 20:12:34.131061   73732 main.go:141] libmachine: (bridge-474762)     </disk>
	I0421 20:12:34.131073   73732 main.go:141] libmachine: (bridge-474762)     <disk type='file' device='disk'>
	I0421 20:12:34.131086   73732 main.go:141] libmachine: (bridge-474762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 20:12:34.131111   73732 main.go:141] libmachine: (bridge-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/bridge-474762.rawdisk'/>
	I0421 20:12:34.131123   73732 main.go:141] libmachine: (bridge-474762)       <target dev='hda' bus='virtio'/>
	I0421 20:12:34.131133   73732 main.go:141] libmachine: (bridge-474762)     </disk>
	I0421 20:12:34.131143   73732 main.go:141] libmachine: (bridge-474762)     <interface type='network'>
	I0421 20:12:34.131151   73732 main.go:141] libmachine: (bridge-474762)       <source network='mk-bridge-474762'/>
	I0421 20:12:34.131172   73732 main.go:141] libmachine: (bridge-474762)       <model type='virtio'/>
	I0421 20:12:34.131183   73732 main.go:141] libmachine: (bridge-474762)     </interface>
	I0421 20:12:34.131200   73732 main.go:141] libmachine: (bridge-474762)     <interface type='network'>
	I0421 20:12:34.131213   73732 main.go:141] libmachine: (bridge-474762)       <source network='default'/>
	I0421 20:12:34.131223   73732 main.go:141] libmachine: (bridge-474762)       <model type='virtio'/>
	I0421 20:12:34.131231   73732 main.go:141] libmachine: (bridge-474762)     </interface>
	I0421 20:12:34.131242   73732 main.go:141] libmachine: (bridge-474762)     <serial type='pty'>
	I0421 20:12:34.131251   73732 main.go:141] libmachine: (bridge-474762)       <target port='0'/>
	I0421 20:12:34.131262   73732 main.go:141] libmachine: (bridge-474762)     </serial>
	I0421 20:12:34.131272   73732 main.go:141] libmachine: (bridge-474762)     <console type='pty'>
	I0421 20:12:34.131281   73732 main.go:141] libmachine: (bridge-474762)       <target type='serial' port='0'/>
	I0421 20:12:34.131291   73732 main.go:141] libmachine: (bridge-474762)     </console>
	I0421 20:12:34.131300   73732 main.go:141] libmachine: (bridge-474762)     <rng model='virtio'>
	I0421 20:12:34.131312   73732 main.go:141] libmachine: (bridge-474762)       <backend model='random'>/dev/random</backend>
	I0421 20:12:34.131329   73732 main.go:141] libmachine: (bridge-474762)     </rng>
	I0421 20:12:34.131351   73732 main.go:141] libmachine: (bridge-474762)     
	I0421 20:12:34.131365   73732 main.go:141] libmachine: (bridge-474762)     
	I0421 20:12:34.131376   73732 main.go:141] libmachine: (bridge-474762)   </devices>
	I0421 20:12:34.131389   73732 main.go:141] libmachine: (bridge-474762) </domain>
	I0421 20:12:34.131401   73732 main.go:141] libmachine: (bridge-474762) 
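For reference, a minimal Go sketch of rendering a libvirt domain definition of the same shape as the XML logged above using text/template. The struct fields, placeholder paths, and the trimmed-down XML are assumptions for illustration only; the kvm2 driver builds its domain XML internally and this is not its code.

    package main

    import (
    	"os"
    	"text/template"
    )

    type domain struct {
    	Name     string
    	MemoryMB int
    	VCPUs    int
    	ISO      string // boot2docker.iso path (placeholder)
    	Disk     string // rawdisk path (placeholder)
    	Network  string // private network name, e.g. mk-bridge-474762
    }

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISO}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.Disk}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
    	tmpl := template.Must(template.New("domain").Parse(domainXML))
    	// Values mirror the bridge-474762 machine from the log; paths are dummies.
    	d := domain{
    		Name:     "bridge-474762",
    		MemoryMB: 3072,
    		VCPUs:    2,
    		ISO:      "/path/to/boot2docker.iso",
    		Disk:     "/path/to/bridge-474762.rawdisk",
    		Network:  "mk-bridge-474762",
    	}
    	if err := tmpl.Execute(os.Stdout, d); err != nil {
    		panic(err)
    	}
    	// The rendered XML could then be passed to `virsh define` or the libvirt
    	// API; here it is only printed.
    }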
	I0421 20:12:34.136759   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:ce:a6:b7 in network default
	I0421 20:12:34.137527   73732 main.go:141] libmachine: (bridge-474762) Ensuring networks are active...
	I0421 20:12:34.137548   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:34.138499   73732 main.go:141] libmachine: (bridge-474762) Ensuring network default is active
	I0421 20:12:34.138920   73732 main.go:141] libmachine: (bridge-474762) Ensuring network mk-bridge-474762 is active
	I0421 20:12:34.139784   73732 main.go:141] libmachine: (bridge-474762) Getting domain xml...
	I0421 20:12:34.140557   73732 main.go:141] libmachine: (bridge-474762) Creating domain...
	I0421 20:12:35.542787   73732 main.go:141] libmachine: (bridge-474762) Waiting to get IP...
	I0421 20:12:35.543828   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:35.544357   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:35.544509   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:35.544428   73861 retry.go:31] will retry after 258.09788ms: waiting for machine to come up
	I0421 20:12:35.803943   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:35.804409   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:35.804429   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:35.804364   73861 retry.go:31] will retry after 322.953644ms: waiting for machine to come up
	I0421 20:12:36.128871   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:36.129435   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:36.129461   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:36.129396   73861 retry.go:31] will retry after 305.862308ms: waiting for machine to come up
	I0421 20:12:34.245578   70482 addons.go:505] duration metric: took 2.040710747s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0421 20:12:34.643126   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:36.647179   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:35.672563   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:35.675769   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:35.676150   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:35.676178   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:35.676478   72192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0421 20:12:35.681283   72192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
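For reference, a rough Go equivalent of the shell one-liner above: drop any existing host.minikube.internal entry from a hosts file and append the gateway mapping. The file path is parameterised so the sketch can run against a scratch copy rather than the real /etc/hosts; it is an illustration, not minikube's ssh_runner step.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // setHostsEntry removes lines ending in "\t<host>" and appends "<ip>\t<host>".
    func setHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any previous mapping for this hostname.
    		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
    			continue
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Run against a scratch copy; writing /etc/hosts itself needs root.
    	if err := setHostsEntry("./hosts.copy", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }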
	I0421 20:12:35.695237   72192 kubeadm.go:877] updating cluster {Name:flannel-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-474762
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:12:35.695376   72192 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:12:35.695416   72192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:12:35.740512   72192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 20:12:35.740573   72192 ssh_runner.go:195] Run: which lz4
	I0421 20:12:35.745048   72192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 20:12:35.749946   72192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:12:35.749968   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 20:12:37.580472   72192 crio.go:462] duration metric: took 1.835461419s to copy over tarball
	I0421 20:12:37.580538   72192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:12:36.436833   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:36.437544   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:36.437575   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:36.437496   73861 retry.go:31] will retry after 514.273827ms: waiting for machine to come up
	I0421 20:12:36.953081   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:36.953693   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:36.953718   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:36.953643   73861 retry.go:31] will retry after 481.725809ms: waiting for machine to come up
	I0421 20:12:37.437538   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:37.438241   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:37.438260   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:37.438159   73861 retry.go:31] will retry after 953.112004ms: waiting for machine to come up
	I0421 20:12:38.393130   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:38.393169   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:38.393186   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:38.393033   73861 retry.go:31] will retry after 810.769843ms: waiting for machine to come up
	I0421 20:12:39.205334   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:39.205909   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:39.205933   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:39.205852   73861 retry.go:31] will retry after 984.63759ms: waiting for machine to come up
	I0421 20:12:40.192463   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:40.193017   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:40.193045   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:40.192969   73861 retry.go:31] will retry after 1.246490815s: waiting for machine to come up
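For reference, a minimal Go sketch of the retry-with-growing-delay pattern visible in the "will retry after ..." lines above, used while waiting for the VM to obtain a DHCP lease. This is an illustrative loop with assumed parameter names, not minikube's actual retry.go.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn with an increasing, jittered delay until
    // it succeeds or the total budget is spent.
    func retryWithBackoff(fn func() error, initial, max, total time.Duration) error {
    	start := time.Now()
    	delay := initial
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > total {
    			return fmt.Errorf("giving up after %s: %w", total, err)
    		}
    		// Add up to 50% jitter so concurrent waiters do not retry in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		if delay *= 2; delay > max {
    			delay = max
    		}
    	}
    }

    func main() {
    	attempts := 0
    	lookupIP := func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	}
    	if err := retryWithBackoff(lookupIP, 250*time.Millisecond, 2*time.Second, 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }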
	I0421 20:12:39.145300   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:41.816379   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:40.460252   72192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87967524s)
	I0421 20:12:40.460283   72192 crio.go:469] duration metric: took 2.879780165s to extract the tarball
	I0421 20:12:40.460293   72192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:12:40.507379   72192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:12:40.562053   72192 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:12:40.562087   72192 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:12:40.562098   72192 kubeadm.go:928] updating node { 192.168.61.193 8443 v1.30.0 crio true true} ...
	I0421 20:12:40.562196   72192 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-474762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:flannel-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0421 20:12:40.562262   72192 ssh_runner.go:195] Run: crio config
	I0421 20:12:40.613881   72192 cni.go:84] Creating CNI manager for "flannel"
	I0421 20:12:40.613914   72192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:12:40.613936   72192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.193 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-474762 NodeName:flannel-474762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:12:40.614139   72192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-474762"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:12:40.614232   72192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:12:40.626153   72192 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:12:40.626220   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:12:40.638100   72192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0421 20:12:40.658861   72192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:12:40.679713   72192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
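For reference, a small Go sketch of writing a kubeadm config like the one printed above to disk and exercising it with `kubeadm init --dry-run` (a real kubeadm flag that renders manifests without touching the node). The config body and paths here are placeholders; minikube copies the real file to /var/tmp/minikube/kubeadm.yaml over SSH and later runs init without --dry-run, as seen further down in this log.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.30.0
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    networking:
      podSubnet: "10.244.0.0/16"
      serviceSubnet: 10.96.0.0/12
    `

    func main() {
    	path := filepath.Join(os.TempDir(), "kubeadm.yaml")
    	if err := os.WriteFile(path, []byte(kubeadmConfig), 0o600); err != nil {
    		panic(err)
    	}
    	// --dry-run surfaces config errors before a real `kubeadm init --config`.
    	cmd := exec.Command("kubeadm", "init", "--dry-run", "--config", path)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "dry run failed:", err)
    	}
    }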
	I0421 20:12:40.701954   72192 ssh_runner.go:195] Run: grep 192.168.61.193	control-plane.minikube.internal$ /etc/hosts
	I0421 20:12:40.707389   72192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:12:40.723703   72192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:12:40.859146   72192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:12:40.880212   72192 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762 for IP: 192.168.61.193
	I0421 20:12:40.880234   72192 certs.go:194] generating shared ca certs ...
	I0421 20:12:40.880249   72192 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:40.880398   72192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:12:40.880451   72192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:12:40.880464   72192 certs.go:256] generating profile certs ...
	I0421 20:12:40.880532   72192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.key
	I0421 20:12:40.880550   72192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt with IP's: []
	I0421 20:12:41.077359   72192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt ...
	I0421 20:12:41.077397   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: {Name:mkc17f8da1dbd414399caa0ace4fab4d8d169c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.077594   72192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.key ...
	I0421 20:12:41.077613   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.key: {Name:mk213f46ea1f77448d08b4645527411446138286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.077745   72192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6
	I0421 20:12:41.077768   72192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.193]
	I0421 20:12:41.240564   72192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6 ...
	I0421 20:12:41.240592   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6: {Name:mk29fcd92080aa6ef47d1810b5dd3464b8a192a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.306466   72192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6 ...
	I0421 20:12:41.306508   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6: {Name:mk9dd938fa76d12f420535efdfbf38a92567ab73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.306660   72192 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt
	I0421 20:12:41.306791   72192 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key
	I0421 20:12:41.306880   72192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key
	I0421 20:12:41.306900   72192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt with IP's: []
	I0421 20:12:41.357681   72192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt ...
	I0421 20:12:41.357707   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt: {Name:mk2a083e2046b9f05e37b262335a9bcd7a0b857b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.358655   72192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key ...
	I0421 20:12:41.358672   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key: {Name:mkbff6c0f3583e74e38c84ab7806698762d4abfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.358858   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:12:41.358890   72192 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:12:41.358900   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:12:41.358925   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:12:41.358946   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:12:41.358966   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:12:41.359005   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:12:41.359683   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:12:41.390769   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:12:41.417979   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:12:41.452227   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:12:41.483717   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 20:12:41.514583   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 20:12:41.564761   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:12:41.598195   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:12:41.691098   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:12:41.723493   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:12:41.755347   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:12:41.784237   72192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:12:41.804563   72192 ssh_runner.go:195] Run: openssl version
	I0421 20:12:41.811360   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:12:41.828268   72192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:12:41.833580   72192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:12:41.833635   72192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:12:41.840430   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:12:41.852439   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:12:41.864561   72192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:12:41.870407   72192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:12:41.870502   72192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:12:41.877032   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:12:41.889386   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:12:41.901532   72192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:12:41.908249   72192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:12:41.908303   72192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:12:41.915280   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
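For reference, a Go sketch that reproduces the symlink step above: ask openssl for the certificate's subject hash and link `<hash>.0` next to the installed PEM, which is how OpenSSL-based clients discover trusted CAs. Paths are placeholders and this is only an illustration of the same idea, not minikube's own code.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert installs a CA PEM under its OpenSSL subject-hash name (<hash>.0),
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` pair in the log.
    func linkCACert(pem, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pem, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(pem, link)
    }

    func main() {
    	// Use a local scratch directory; /etc/ssl/certs would require root.
    	if err := linkCACert("./minikubeCA.pem", "."); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }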
	I0421 20:12:41.929403   72192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:12:41.935376   72192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:12:41.935442   72192 kubeadm.go:391] StartCluster: {Name:flannel-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-474762 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:12:41.935534   72192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:12:41.935616   72192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:12:41.976978   72192 cri.go:89] found id: ""
	I0421 20:12:41.977037   72192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:12:41.988304   72192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:12:41.998915   72192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:12:42.009394   72192 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:12:42.009419   72192 kubeadm.go:156] found existing configuration files:
	
	I0421 20:12:42.009471   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:12:42.023018   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:12:42.023068   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:12:42.036467   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:12:42.046578   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:12:42.046639   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:12:42.060445   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:12:42.072111   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:12:42.072174   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:12:42.083184   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:12:42.094029   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:12:42.094106   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:12:42.105504   72192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:12:42.164589   72192 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:12:42.164711   72192 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:12:42.319971   72192 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:12:42.320121   72192 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:12:42.320262   72192 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:12:42.588670   72192 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:12:42.591432   72192 out.go:204]   - Generating certificates and keys ...
	I0421 20:12:42.591528   72192 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:12:42.591608   72192 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:12:42.731282   72192 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:12:42.983351   72192 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:12:43.095317   72192 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 20:12:43.206072   72192 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 20:12:43.466252   72192 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 20:12:43.466590   72192 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [flannel-474762 localhost] and IPs [192.168.61.193 127.0.0.1 ::1]
	I0421 20:12:43.514694   72192 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 20:12:43.514955   72192 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [flannel-474762 localhost] and IPs [192.168.61.193 127.0.0.1 ::1]
	I0421 20:12:43.889877   72192 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:12:43.996775   72192 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:12:44.275943   72192 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 20:12:44.276843   72192 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:12:44.499582   72192 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:12:44.593886   72192 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:12:44.917393   72192 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:12:45.061380   72192 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:12:45.406186   72192 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:12:45.407455   72192 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:12:45.410292   72192 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:12:41.441446   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:41.493087   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:41.493118   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:41.441932   73861 retry.go:31] will retry after 1.979730834s: waiting for machine to come up
	I0421 20:12:43.423365   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:43.423901   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:43.423937   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:43.423844   73861 retry.go:31] will retry after 2.804462168s: waiting for machine to come up
	I0421 20:12:46.231392   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:46.231940   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:46.231980   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:46.231882   73861 retry.go:31] will retry after 3.463170537s: waiting for machine to come up
	I0421 20:12:44.144325   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:44.643899   70482 pod_ready.go:97] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.147 HostIPs:[{IP:192.168.39
.147}] PodIP: PodIPs:[] StartTime:2024-04-21 20:12:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:12:34 +0000 UTC,FinishedAt:2024-04-21 20:12:44 +0000 UTC,ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431 Started:0xc0037a9d00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:12:44.643941   70482 pod_ready.go:81] duration metric: took 12.007952005s for pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace to be "Ready" ...
	E0421 20:12:44.643956   70482 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.3
9.147 HostIPs:[{IP:192.168.39.147}] PodIP: PodIPs:[] StartTime:2024-04-21 20:12:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:12:34 +0000 UTC,FinishedAt:2024-04-21 20:12:44 +0000 UTC,ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431 Started:0xc0037a9d00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:12:44.643973   70482 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace to be "Ready" ...
	I0421 20:12:46.652416   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:45.412343   72192 out.go:204]   - Booting up control plane ...
	I0421 20:12:45.412481   72192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:12:45.412580   72192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:12:45.413219   72192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:12:45.433058   72192 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:12:45.435012   72192 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:12:45.435091   72192 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:12:45.580105   72192 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:12:45.580276   72192 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:12:46.580958   72192 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001296402s
	I0421 20:12:46.581082   72192 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:12:49.696701   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:49.697291   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:49.697318   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:49.697227   73861 retry.go:31] will retry after 3.570145567s: waiting for machine to come up
	I0421 20:12:48.653381   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:50.653659   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:52.082209   72192 kubeadm.go:309] [api-check] The API server is healthy after 5.501784628s
	I0421 20:12:52.096008   72192 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:12:52.113812   72192 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:12:52.149476   72192 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:12:52.149747   72192 kubeadm.go:309] [mark-control-plane] Marking the node flannel-474762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:12:52.166919   72192 kubeadm.go:309] [bootstrap-token] Using token: 7uvvlt.zezmhmug9wwgucft
	I0421 20:12:52.168399   72192 out.go:204]   - Configuring RBAC rules ...
	I0421 20:12:52.168519   72192 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:12:52.173908   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:12:52.189769   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:12:52.196946   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:12:52.201186   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:12:52.205617   72192 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:12:52.490661   72192 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:12:52.958707   72192 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:12:53.490742   72192 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:12:53.491944   72192 kubeadm.go:309] 
	I0421 20:12:53.492025   72192 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:12:53.492039   72192 kubeadm.go:309] 
	I0421 20:12:53.492128   72192 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:12:53.492138   72192 kubeadm.go:309] 
	I0421 20:12:53.492194   72192 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:12:53.492276   72192 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:12:53.492364   72192 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:12:53.492374   72192 kubeadm.go:309] 
	I0421 20:12:53.492482   72192 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:12:53.492501   72192 kubeadm.go:309] 
	I0421 20:12:53.492541   72192 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:12:53.492548   72192 kubeadm.go:309] 
	I0421 20:12:53.492591   72192 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:12:53.492708   72192 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:12:53.492816   72192 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:12:53.492884   72192 kubeadm.go:309] 
	I0421 20:12:53.493046   72192 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:12:53.493158   72192 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:12:53.493173   72192 kubeadm.go:309] 
	I0421 20:12:53.493284   72192 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7uvvlt.zezmhmug9wwgucft \
	I0421 20:12:53.493418   72192 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:12:53.493449   72192 kubeadm.go:309] 	--control-plane 
	I0421 20:12:53.493464   72192 kubeadm.go:309] 
	I0421 20:12:53.493574   72192 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:12:53.493582   72192 kubeadm.go:309] 
	I0421 20:12:53.493678   72192 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7uvvlt.zezmhmug9wwgucft \
	I0421 20:12:53.493847   72192 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:12:53.494566   72192 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:12:53.494610   72192 cni.go:84] Creating CNI manager for "flannel"
	I0421 20:12:53.496633   72192 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0421 20:12:53.271551   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:53.272046   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:53.272070   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:53.271997   73861 retry.go:31] will retry after 5.239553074s: waiting for machine to come up
	I0421 20:12:53.150597   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:55.152144   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:53.498032   72192 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 20:12:53.505199   72192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 20:12:53.505219   72192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0421 20:12:53.530750   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 20:12:53.961168   72192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:12:53.961256   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:53.961251   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-474762 minikube.k8s.io/updated_at=2024_04_21T20_12_53_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=flannel-474762 minikube.k8s.io/primary=true
	I0421 20:12:53.983252   72192 ops.go:34] apiserver oom_adj: -16
	I0421 20:12:54.154688   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:54.654773   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:55.154782   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:55.654891   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:56.154706   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:56.655509   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:57.155269   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:58.512902   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.513457   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has current primary IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.513504   73732 main.go:141] libmachine: (bridge-474762) Found IP for machine: 192.168.50.35
	I0421 20:12:58.513530   73732 main.go:141] libmachine: (bridge-474762) Reserving static IP address...
	I0421 20:12:58.513899   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find host DHCP lease matching {name: "bridge-474762", mac: "52:54:00:46:ee:7b", ip: "192.168.50.35"} in network mk-bridge-474762
	I0421 20:12:58.591525   73732 main.go:141] libmachine: (bridge-474762) DBG | Getting to WaitForSSH function...
	I0421 20:12:58.591556   73732 main.go:141] libmachine: (bridge-474762) Reserved static IP address: 192.168.50.35
	I0421 20:12:58.591569   73732 main.go:141] libmachine: (bridge-474762) Waiting for SSH to be available...
	I0421 20:12:58.594246   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.594710   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.594745   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.594905   73732 main.go:141] libmachine: (bridge-474762) DBG | Using SSH client type: external
	I0421 20:12:58.594926   73732 main.go:141] libmachine: (bridge-474762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa (-rw-------)
	I0421 20:12:58.594953   73732 main.go:141] libmachine: (bridge-474762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 20:12:58.594966   73732 main.go:141] libmachine: (bridge-474762) DBG | About to run SSH command:
	I0421 20:12:58.594982   73732 main.go:141] libmachine: (bridge-474762) DBG | exit 0
	I0421 20:12:58.722944   73732 main.go:141] libmachine: (bridge-474762) DBG | SSH cmd err, output: <nil>: 
	I0421 20:12:58.723218   73732 main.go:141] libmachine: (bridge-474762) KVM machine creation complete!
	I0421 20:12:58.723572   73732 main.go:141] libmachine: (bridge-474762) Calling .GetConfigRaw
	I0421 20:12:58.724176   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:12:58.724416   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:12:58.724594   73732 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 20:12:58.724612   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:12:58.726199   73732 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 20:12:58.726215   73732 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 20:12:58.726222   73732 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 20:12:58.726230   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:58.728727   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.729172   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.729198   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.729404   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:58.729574   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.729718   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.729872   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:58.730047   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:58.730341   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:58.730360   73732 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 20:12:58.841974   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:12:58.842017   73732 main.go:141] libmachine: Detecting the provisioner...
	I0421 20:12:58.842030   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:58.844975   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.845324   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.845366   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.845457   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:58.845693   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.845886   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.846096   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:58.846268   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:58.846461   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:58.846476   73732 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 20:12:58.951527   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 20:12:58.951602   73732 main.go:141] libmachine: found compatible host: buildroot
	I0421 20:12:58.951613   73732 main.go:141] libmachine: Provisioning with buildroot...
	I0421 20:12:58.951621   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:58.951885   73732 buildroot.go:166] provisioning hostname "bridge-474762"
	I0421 20:12:58.951913   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:58.952084   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:58.954961   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.955213   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.955235   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.955388   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:58.955580   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.955768   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.955919   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:58.956077   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:58.956279   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:58.956299   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-474762 && echo "bridge-474762" | sudo tee /etc/hostname
	I0421 20:12:59.076059   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-474762
	
	I0421 20:12:59.076104   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.079018   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.079354   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.079384   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.079600   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:59.079775   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.079956   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.080081   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:59.080255   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:59.080461   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:59.080490   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-474762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-474762/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-474762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:12:59.193150   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:12:59.193179   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 20:12:59.193238   73732 buildroot.go:174] setting up certificates
	I0421 20:12:59.193251   73732 provision.go:84] configureAuth start
	I0421 20:12:59.193264   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:59.193555   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:12:59.196640   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.197050   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.197078   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.197266   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.199977   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.200375   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.200404   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.200566   73732 provision.go:143] copyHostCerts
	I0421 20:12:59.200620   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 20:12:59.200633   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 20:12:59.200695   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 20:12:59.200819   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 20:12:59.200831   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 20:12:59.200859   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 20:12:59.200934   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 20:12:59.200944   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 20:12:59.200967   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 20:12:59.201035   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.bridge-474762 san=[127.0.0.1 192.168.50.35 bridge-474762 localhost minikube]
	I0421 20:12:59.578123   73732 provision.go:177] copyRemoteCerts
	I0421 20:12:59.578186   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:12:59.578222   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.581175   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.581471   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.581500   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.581672   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:59.581873   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.582052   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:59.582244   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:12:59.671308   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:12:59.703145   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:12:59.731679   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 20:12:59.763930   73732 provision.go:87] duration metric: took 570.666818ms to configureAuth
	I0421 20:12:59.763963   73732 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:12:59.764215   73732 config.go:182] Loaded profile config "bridge-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:59.764313   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.767256   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.767621   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.767654   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.767850   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:59.768034   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.768184   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.768362   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:59.768540   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:59.768725   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:59.768745   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 20:13:00.070815   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 20:13:00.070848   73732 main.go:141] libmachine: Checking connection to Docker...
	I0421 20:13:00.070859   73732 main.go:141] libmachine: (bridge-474762) Calling .GetURL
	I0421 20:13:00.072025   73732 main.go:141] libmachine: (bridge-474762) DBG | Using libvirt version 6000000
	I0421 20:13:00.074632   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.074985   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.075023   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.075178   73732 main.go:141] libmachine: Docker is up and running!
	I0421 20:13:00.075196   73732 main.go:141] libmachine: Reticulating splines...
	I0421 20:13:00.075203   73732 client.go:171] duration metric: took 26.514809444s to LocalClient.Create
	I0421 20:13:00.075232   73732 start.go:167] duration metric: took 26.514871671s to libmachine.API.Create "bridge-474762"
	I0421 20:13:00.075251   73732 start.go:293] postStartSetup for "bridge-474762" (driver="kvm2")
	I0421 20:13:00.075266   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:13:00.075291   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.075521   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:13:00.075546   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.077973   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.078401   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.078433   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.078628   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.078823   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.078993   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.079160   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:00.163821   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:13:00.169566   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:13:00.169596   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 20:13:00.169670   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 20:13:00.169786   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 20:13:00.169925   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:13:00.181471   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:13:00.212417   73732 start.go:296] duration metric: took 137.147897ms for postStartSetup
	I0421 20:13:00.212481   73732 main.go:141] libmachine: (bridge-474762) Calling .GetConfigRaw
	I0421 20:13:00.213152   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:13:00.216240   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.216678   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.216707   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.217057   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/config.json ...
	I0421 20:13:00.217333   73732 start.go:128] duration metric: took 26.691860721s to createHost
	I0421 20:13:00.217366   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.220000   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.220313   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.220346   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.220487   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.220701   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.220897   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.221055   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.221226   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:13:00.221447   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:13:00.221458   73732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 20:13:00.327556   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713730380.306994750
	
	I0421 20:13:00.327600   73732 fix.go:216] guest clock: 1713730380.306994750
	I0421 20:13:00.327613   73732 fix.go:229] Guest: 2024-04-21 20:13:00.30699475 +0000 UTC Remote: 2024-04-21 20:13:00.217351909 +0000 UTC m=+38.939820834 (delta=89.642841ms)
	I0421 20:13:00.327649   73732 fix.go:200] guest clock delta is within tolerance: 89.642841ms
	I0421 20:13:00.327655   73732 start.go:83] releasing machines lock for "bridge-474762", held for 26.802411485s
	I0421 20:13:00.327701   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.328008   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:13:00.330915   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.331259   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.331288   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.331465   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.331923   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.332114   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.332228   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:13:00.332269   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.332350   73732 ssh_runner.go:195] Run: cat /version.json
	I0421 20:13:00.332375   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.334814   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335132   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335164   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.335187   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335354   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.335505   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.335561   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.335586   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335691   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.335786   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.335877   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:00.335968   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.336145   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.336304   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:00.411669   73732 ssh_runner.go:195] Run: systemctl --version
	I0421 20:13:00.436757   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 20:13:00.608302   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 20:13:00.615628   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:13:00.615684   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:13:00.634392   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:13:00.634415   73732 start.go:494] detecting cgroup driver to use...
	I0421 20:13:00.634492   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:13:00.654407   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:13:00.672781   73732 docker.go:217] disabling cri-docker service (if available) ...
	I0421 20:13:00.672855   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 20:13:00.690246   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 20:13:00.709940   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 20:13:00.858946   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 20:13:01.012900   73732 docker.go:233] disabling docker service ...
	I0421 20:13:01.012968   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 20:13:01.030449   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 20:13:01.044904   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 20:13:01.191789   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 20:13:01.324317   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 20:13:01.341434   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:13:01.363396   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 20:13:01.363454   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.375746   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 20:13:01.375836   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.387909   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.401130   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.413072   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:13:01.425609   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.437685   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.458571   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
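	The sed commands logged between 20:13:01.363 and 20:13:01.458 above write CRI-O's runtime settings into /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of the fragment they produce, with section headers assumed from CRI-O's stock drop-in layout and values taken only from the commands shown above:

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]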
	I0421 20:13:01.470843   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:13:01.481835   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 20:13:01.481897   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 20:13:01.497632   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:13:01.509462   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:01.645265   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 20:13:01.842015   73732 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 20:13:01.842117   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 20:13:01.847647   73732 start.go:562] Will wait 60s for crictl version
	I0421 20:13:01.847699   73732 ssh_runner.go:195] Run: which crictl
	I0421 20:13:01.852455   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:13:01.902259   73732 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 20:13:01.902346   73732 ssh_runner.go:195] Run: crio --version
	I0421 20:13:01.935242   73732 ssh_runner.go:195] Run: crio --version
	I0421 20:13:01.969320   73732 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 20:12:57.651527   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:59.657462   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:57.655155   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:58.155241   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:58.655272   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:59.154754   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:59.655756   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:00.155010   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:00.654868   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:01.155384   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:01.654997   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:02.155290   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:01.970923   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:13:01.973997   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:01.974373   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:01.974400   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:01.974728   73732 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0421 20:13:01.980059   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:13:01.995358   73732 kubeadm.go:877] updating cluster {Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:13:01.995477   73732 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:13:01.995537   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:13:02.033071   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 20:13:02.033139   73732 ssh_runner.go:195] Run: which lz4
	I0421 20:13:02.038142   73732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 20:13:02.042763   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:13:02.042785   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 20:13:03.798993   73732 crio.go:462] duration metric: took 1.760877713s to copy over tarball
	I0421 20:13:03.799081   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:13:02.155320   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:04.653886   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:02.655143   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:03.155732   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:03.655254   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:04.154980   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:04.654695   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:05.154770   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:05.655256   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:06.155249   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:06.655609   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:07.643449   72192 kubeadm.go:1107] duration metric: took 13.682253671s to wait for elevateKubeSystemPrivileges
	W0421 20:13:07.643486   72192 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:13:07.643493   72192 kubeadm.go:393] duration metric: took 25.708065058s to StartCluster
	I0421 20:13:07.643511   72192 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.643585   72192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:13:07.645549   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.645763   72192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:13:07.645779   72192 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:13:07.647331   72192 out.go:177] * Verifying Kubernetes components...
	I0421 20:13:07.645814   72192 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:13:07.645992   72192 config.go:182] Loaded profile config "flannel-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:13:07.648695   72192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:07.647400   72192 addons.go:69] Setting storage-provisioner=true in profile "flannel-474762"
	I0421 20:13:07.648814   72192 addons.go:234] Setting addon storage-provisioner=true in "flannel-474762"
	I0421 20:13:07.648908   72192 host.go:66] Checking if "flannel-474762" exists ...
	I0421 20:13:07.647412   72192 addons.go:69] Setting default-storageclass=true in profile "flannel-474762"
	I0421 20:13:07.649205   72192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-474762"
	I0421 20:13:07.649410   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.649462   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.649655   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.649695   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.667623   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0421 20:13:07.668032   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.668559   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.668585   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.668916   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.669104   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:13:07.671367   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I0421 20:13:07.671780   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.672276   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.672306   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.672679   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.673140   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.673173   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.673492   72192 addons.go:234] Setting addon default-storageclass=true in "flannel-474762"
	I0421 20:13:07.673528   72192 host.go:66] Checking if "flannel-474762" exists ...
	I0421 20:13:07.673872   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.673916   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.690241   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0421 20:13:07.690706   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.691226   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.691250   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.691645   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.691889   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:13:07.693529   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:13:07.695485   72192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:13:07.697027   72192 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:07.697045   72192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:13:07.697062   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:13:07.699703   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.700104   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:13:07.700129   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.700257   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:13:07.700441   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:13:07.700604   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:13:07.700741   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:13:07.702488   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0421 20:13:07.702948   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.703401   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.703417   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.703740   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.704256   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.704301   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.720661   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0421 20:13:07.721240   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.721797   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.721822   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.722189   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.722388   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:13:07.724131   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:13:07.724413   72192 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:07.724432   72192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:13:07.724450   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:13:07.727442   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.727948   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:13:07.727970   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.728008   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:13:07.728599   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:13:07.728814   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:13:07.729002   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:13:07.937567   72192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:13:07.937740   72192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:13:07.979152   72192 node_ready.go:35] waiting up to 15m0s for node "flannel-474762" to be "Ready" ...
	I0421 20:13:08.095582   72192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:08.209889   72192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:08.554346   72192 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0421 20:13:08.554443   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:08.554467   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:08.554830   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:08.554882   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:08.554902   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:08.554919   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:08.554891   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:08.555239   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:08.555287   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:08.555305   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:08.570091   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:08.570120   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:08.570825   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:08.570877   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:08.570892   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:09.002754   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:09.002777   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:09.003055   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:09.003084   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:09.003095   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:09.003107   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:09.003505   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:09.003561   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:09.006609   72192 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 20:13:09.003507   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:06.858589   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.05947899s)
	I0421 20:13:06.858618   73732 crio.go:469] duration metric: took 3.059597563s to extract the tarball
	I0421 20:13:06.858629   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:13:06.899937   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:13:06.960119   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:13:06.960139   73732 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:13:06.960146   73732 kubeadm.go:928] updating node { 192.168.50.35 8443 v1.30.0 crio true true} ...
	I0421 20:13:06.960264   73732 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-474762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0421 20:13:06.960363   73732 ssh_runner.go:195] Run: crio config
	I0421 20:13:07.017595   73732 cni.go:84] Creating CNI manager for "bridge"
	I0421 20:13:07.017626   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:13:07.017649   73732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.35 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-474762 NodeName:bridge-474762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:13:07.017797   73732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-474762"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:13:07.017852   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:13:07.029889   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:13:07.029962   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:13:07.040628   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0421 20:13:07.063906   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:13:07.082951   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0421 20:13:07.102288   73732 ssh_runner.go:195] Run: grep 192.168.50.35	control-plane.minikube.internal$ /etc/hosts
	I0421 20:13:07.106666   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:13:07.120702   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:07.265981   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:13:07.285107   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762 for IP: 192.168.50.35
	I0421 20:13:07.285130   73732 certs.go:194] generating shared ca certs ...
	I0421 20:13:07.285149   73732 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.285368   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:13:07.285427   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:13:07.285448   73732 certs.go:256] generating profile certs ...
	I0421 20:13:07.285517   73732 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.key
	I0421 20:13:07.285536   73732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt with IP's: []
	I0421 20:13:07.605681   73732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt ...
	I0421 20:13:07.605719   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: {Name:mk38bef37a27f99facbe20e2098d106558015f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.605932   73732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.key ...
	I0421 20:13:07.605949   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.key: {Name:mk7af3c804a2486eec74e2c8abd8813e7941b34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.606079   73732 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5
	I0421 20:13:07.606101   73732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.35]
	I0421 20:13:07.764263   73732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5 ...
	I0421 20:13:07.764291   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5: {Name:mk134af05868bf23ad3534ea8aaefa1f3c91ed55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.764436   73732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5 ...
	I0421 20:13:07.764454   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5: {Name:mkb6ed907eb4b4b4bcb788b6ee72b93cf7939671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.764566   73732 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt
	I0421 20:13:07.764678   73732 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key
	I0421 20:13:07.764735   73732 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key
	I0421 20:13:07.764750   73732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt with IP's: []
	I0421 20:13:07.966757   73732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt ...
	I0421 20:13:07.966784   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt: {Name:mk930878172e737a3210d35d0129c249edfa25c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.966970   73732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key ...
	I0421 20:13:07.966989   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key: {Name:mk1d320a394635b7646a07c0714737b624ac242f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.967231   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:13:07.967273   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:13:07.967325   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:13:07.967359   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:13:07.967390   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:13:07.967420   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:13:07.967476   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:13:07.968251   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:13:08.003296   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:13:08.032115   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:13:08.059930   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:13:08.094029   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 20:13:08.126679   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 20:13:08.157460   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:13:08.193094   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 20:13:08.228277   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:13:08.267056   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:13:08.301616   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:13:08.336511   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:13:08.360262   73732 ssh_runner.go:195] Run: openssl version
	I0421 20:13:08.367585   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:13:08.381306   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:13:08.386781   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:13:08.386846   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:13:08.393759   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:13:08.406828   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:13:08.419328   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:13:08.425353   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:13:08.425419   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:13:08.432115   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:13:08.445176   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:13:08.459186   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:13:08.464852   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:13:08.464965   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:13:08.472060   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:13:08.484653   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:13:08.489871   73732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:13:08.489938   73732 kubeadm.go:391] StartCluster: {Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:13:08.490032   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:13:08.490115   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:13:08.542184   73732 cri.go:89] found id: ""
	I0421 20:13:08.542274   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:13:08.557818   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:13:08.570702   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:13:08.583601   73732 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:13:08.583623   73732 kubeadm.go:156] found existing configuration files:
	
	I0421 20:13:08.583668   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:13:08.595186   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:13:08.595265   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:13:08.608307   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:13:08.623730   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:13:08.623812   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:13:08.637568   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:13:08.649870   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:13:08.649935   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:13:08.664876   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:13:08.681703   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:13:08.681766   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:13:08.712011   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:13:08.784150   73732 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:13:08.784240   73732 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:13:08.937564   73732 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:13:08.937707   73732 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:13:08.937833   73732 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:13:09.243153   73732 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:13:09.245263   73732 out.go:204]   - Generating certificates and keys ...
	I0421 20:13:09.245398   73732 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:13:09.245529   73732 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:13:09.471208   73732 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:13:09.591901   73732 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:13:09.768935   73732 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 20:13:09.957888   73732 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 20:13:10.078525   73732 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 20:13:10.078684   73732 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [bridge-474762 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	I0421 20:13:10.240646   73732 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 20:13:10.240834   73732 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [bridge-474762 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	I0421 20:13:10.458251   73732 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:13:10.795103   73732 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:13:10.986823   73732 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 20:13:10.986910   73732 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:13:11.127092   73732 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:13:11.439115   73732 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:13:11.532698   73732 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:13:11.700537   73732 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:13:11.963479   73732 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:13:11.966199   73732 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:13:11.974389   73732 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:13:07.152885   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:09.652064   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:09.008179   72192 addons.go:505] duration metric: took 1.362368384s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0421 20:13:09.059435   72192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-474762" context rescaled to 1 replicas
	I0421 20:13:09.983449   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:12.485090   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:12.152339   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:13.153753   70482 pod_ready.go:92] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.153783   70482 pod_ready.go:81] duration metric: took 28.509799697s for pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.153797   70482 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.161854   70482 pod_ready.go:92] pod "etcd-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.161877   70482 pod_ready.go:81] duration metric: took 8.071208ms for pod "etcd-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.161892   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.168354   70482 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.168377   70482 pod_ready.go:81] duration metric: took 6.476734ms for pod "kube-apiserver-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.168390   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.173246   70482 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.173271   70482 pod_ready.go:81] duration metric: took 4.871919ms for pod "kube-controller-manager-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.173282   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-wgg4k" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.177972   70482 pod_ready.go:92] pod "kube-proxy-wgg4k" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.177997   70482 pod_ready.go:81] duration metric: took 4.706452ms for pod "kube-proxy-wgg4k" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.178009   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.549496   70482 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.549528   70482 pod_ready.go:81] duration metric: took 371.510124ms for pod "kube-scheduler-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.549539   70482 pod_ready.go:38] duration metric: took 40.943019237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:13.549556   70482 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:13:13.549615   70482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:13:13.572571   70482 api_server.go:72] duration metric: took 41.367404134s to wait for apiserver process to appear ...
	I0421 20:13:13.572610   70482 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:13:13.572641   70482 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0421 20:13:13.577758   70482 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0421 20:13:13.579081   70482 api_server.go:141] control plane version: v1.30.0
	I0421 20:13:13.579104   70482 api_server.go:131] duration metric: took 6.485234ms to wait for apiserver health ...
	I0421 20:13:13.579114   70482 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:13:13.751726   70482 system_pods.go:59] 7 kube-system pods found
	I0421 20:13:13.751765   70482 system_pods.go:61] "coredns-7db6d8ff4d-xn48s" [0de9c7fe-f4ff-4fa7-975f-e5d997794cc0] Running
	I0421 20:13:13.751772   70482 system_pods.go:61] "etcd-enable-default-cni-474762" [94751a3f-7155-4898-a58a-dec8f3dbfeb9] Running
	I0421 20:13:13.751776   70482 system_pods.go:61] "kube-apiserver-enable-default-cni-474762" [9123f173-e342-4d62-a0a7-5c1af286a9e3] Running
	I0421 20:13:13.751780   70482 system_pods.go:61] "kube-controller-manager-enable-default-cni-474762" [6194a232-ff72-48a5-a5ed-30f318f551b1] Running
	I0421 20:13:13.751783   70482 system_pods.go:61] "kube-proxy-wgg4k" [f625ecf0-3d23-433a-9a09-ab316cafb2f0] Running
	I0421 20:13:13.751786   70482 system_pods.go:61] "kube-scheduler-enable-default-cni-474762" [e5eed32b-7fb6-485c-ae85-023720b92a69] Running
	I0421 20:13:13.751789   70482 system_pods.go:61] "storage-provisioner" [8cd24301-2b24-4237-8e9d-475a64634f41] Running
	I0421 20:13:13.751796   70482 system_pods.go:74] duration metric: took 172.674755ms to wait for pod list to return data ...
	I0421 20:13:13.751803   70482 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:13:13.948189   70482 default_sa.go:45] found service account: "default"
	I0421 20:13:13.948224   70482 default_sa.go:55] duration metric: took 196.415194ms for default service account to be created ...
	I0421 20:13:13.948233   70482 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:13:14.152936   70482 system_pods.go:86] 7 kube-system pods found
	I0421 20:13:14.152964   70482 system_pods.go:89] "coredns-7db6d8ff4d-xn48s" [0de9c7fe-f4ff-4fa7-975f-e5d997794cc0] Running
	I0421 20:13:14.152970   70482 system_pods.go:89] "etcd-enable-default-cni-474762" [94751a3f-7155-4898-a58a-dec8f3dbfeb9] Running
	I0421 20:13:14.152975   70482 system_pods.go:89] "kube-apiserver-enable-default-cni-474762" [9123f173-e342-4d62-a0a7-5c1af286a9e3] Running
	I0421 20:13:14.152979   70482 system_pods.go:89] "kube-controller-manager-enable-default-cni-474762" [6194a232-ff72-48a5-a5ed-30f318f551b1] Running
	I0421 20:13:14.152983   70482 system_pods.go:89] "kube-proxy-wgg4k" [f625ecf0-3d23-433a-9a09-ab316cafb2f0] Running
	I0421 20:13:14.152987   70482 system_pods.go:89] "kube-scheduler-enable-default-cni-474762" [e5eed32b-7fb6-485c-ae85-023720b92a69] Running
	I0421 20:13:14.152991   70482 system_pods.go:89] "storage-provisioner" [8cd24301-2b24-4237-8e9d-475a64634f41] Running
	I0421 20:13:14.152996   70482 system_pods.go:126] duration metric: took 204.758306ms to wait for k8s-apps to be running ...
	I0421 20:13:14.153004   70482 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:13:14.153043   70482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:13:14.172989   70482 system_svc.go:56] duration metric: took 19.974815ms WaitForService to wait for kubelet
	I0421 20:13:14.173028   70482 kubeadm.go:576] duration metric: took 41.967867255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:13:14.173054   70482 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:13:14.350556   70482 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:13:14.350587   70482 node_conditions.go:123] node cpu capacity is 2
	I0421 20:13:14.350601   70482 node_conditions.go:105] duration metric: took 177.541558ms to run NodePressure ...
	I0421 20:13:14.350616   70482 start.go:240] waiting for startup goroutines ...
	I0421 20:13:14.350626   70482 start.go:245] waiting for cluster config update ...
	I0421 20:13:14.350639   70482 start.go:254] writing updated cluster config ...
	I0421 20:13:14.350986   70482 ssh_runner.go:195] Run: rm -f paused
	I0421 20:13:14.416852   70482 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:13:14.418906   70482 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-474762" cluster and "default" namespace by default
	I0421 20:13:11.975842   73732 out.go:204]   - Booting up control plane ...
	I0421 20:13:11.975958   73732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:13:11.976062   73732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:13:11.976624   73732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:13:12.005975   73732 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:13:12.006154   73732 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:13:12.006210   73732 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:13:12.164312   73732 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:13:12.164415   73732 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:13:12.665474   73732 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.663722ms
	I0421 20:13:12.665586   73732 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:13:14.486548   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:16.983558   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:18.166872   73732 kubeadm.go:309] [api-check] The API server is healthy after 5.502240103s
	I0421 20:13:18.194686   73732 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:13:18.218931   73732 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:13:18.306951   73732 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:13:18.307196   73732 kubeadm.go:309] [mark-control-plane] Marking the node bridge-474762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:13:18.335811   73732 kubeadm.go:309] [bootstrap-token] Using token: jlj9t3.y9mg1ccu6iugp1il
	I0421 20:13:18.338390   73732 out.go:204]   - Configuring RBAC rules ...
	I0421 20:13:18.338527   73732 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:13:18.351387   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:13:18.384358   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:13:18.400734   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:13:18.420787   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:13:18.430500   73732 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:13:18.578363   73732 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:13:19.026211   73732 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:13:19.840397   73732 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:13:19.841701   73732 kubeadm.go:309] 
	I0421 20:13:19.841815   73732 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:13:19.841842   73732 kubeadm.go:309] 
	I0421 20:13:19.841952   73732 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:13:19.841963   73732 kubeadm.go:309] 
	I0421 20:13:19.842004   73732 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:13:19.842117   73732 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:13:19.842202   73732 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:13:19.842213   73732 kubeadm.go:309] 
	I0421 20:13:19.842286   73732 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:13:19.842297   73732 kubeadm.go:309] 
	I0421 20:13:19.842352   73732 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:13:19.842362   73732 kubeadm.go:309] 
	I0421 20:13:19.842430   73732 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:13:19.842512   73732 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:13:19.842596   73732 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:13:19.842607   73732 kubeadm.go:309] 
	I0421 20:13:19.842730   73732 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:13:19.842850   73732 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:13:19.842858   73732 kubeadm.go:309] 
	I0421 20:13:19.842976   73732 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jlj9t3.y9mg1ccu6iugp1il \
	I0421 20:13:19.843160   73732 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:13:19.843200   73732 kubeadm.go:309] 	--control-plane 
	I0421 20:13:19.843218   73732 kubeadm.go:309] 
	I0421 20:13:19.843348   73732 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:13:19.843370   73732 kubeadm.go:309] 
	I0421 20:13:19.843513   73732 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jlj9t3.y9mg1ccu6iugp1il \
	I0421 20:13:19.843662   73732 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:13:19.843925   73732 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:13:19.843967   73732 cni.go:84] Creating CNI manager for "bridge"
	I0421 20:13:19.853961   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:13:19.855837   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:13:19.871124   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:13:19.899860   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:13:19.900002   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:19.900095   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-474762 minikube.k8s.io/updated_at=2024_04_21T20_13_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=bridge-474762 minikube.k8s.io/primary=true
	I0421 20:13:20.112485   73732 ops.go:34] apiserver oom_adj: -16
	I0421 20:13:20.112601   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:20.612947   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:21.113482   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:18.983729   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:20.483066   72192 node_ready.go:49] node "flannel-474762" has status "Ready":"True"
	I0421 20:13:20.483091   72192 node_ready.go:38] duration metric: took 12.503897106s for node "flannel-474762" to be "Ready" ...
	I0421 20:13:20.483103   72192 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:20.490733   72192 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:22.497745   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:21.612638   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:22.113624   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:22.613407   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:23.113479   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:23.613270   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:24.113231   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:24.612659   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:25.113113   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:25.613596   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:26.113330   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:24.498086   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:26.997873   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:26.613367   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:27.112631   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:27.612934   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:28.113238   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:28.613334   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:29.113110   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:29.613387   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:30.113077   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:30.613151   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:31.112853   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:31.613369   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:32.113262   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:32.270897   73732 kubeadm.go:1107] duration metric: took 12.370941451s to wait for elevateKubeSystemPrivileges
	W0421 20:13:32.270939   73732 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:13:32.270948   73732 kubeadm.go:393] duration metric: took 23.781015701s to StartCluster
	I0421 20:13:32.270970   73732 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:32.271042   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:13:32.273002   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:32.273221   73732 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:13:32.275028   73732 out.go:177] * Verifying Kubernetes components...
	I0421 20:13:32.273320   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:13:32.273343   73732 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:13:32.273505   73732 config.go:182] Loaded profile config "bridge-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:13:32.276834   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:32.276978   73732 addons.go:69] Setting storage-provisioner=true in profile "bridge-474762"
	I0421 20:13:32.277005   73732 addons.go:234] Setting addon storage-provisioner=true in "bridge-474762"
	I0421 20:13:32.277032   73732 host.go:66] Checking if "bridge-474762" exists ...
	I0421 20:13:32.277391   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.277408   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.277474   73732 addons.go:69] Setting default-storageclass=true in profile "bridge-474762"
	I0421 20:13:32.277503   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-474762"
	I0421 20:13:32.277873   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.277896   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.294466   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0421 20:13:32.294689   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0421 20:13:32.294967   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.295054   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.295476   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.295490   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.295836   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.296440   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.296464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.296701   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.296718   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.298802   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.299003   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:13:32.303342   73732 addons.go:234] Setting addon default-storageclass=true in "bridge-474762"
	I0421 20:13:32.303383   73732 host.go:66] Checking if "bridge-474762" exists ...
	I0421 20:13:32.303733   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.303762   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.314810   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0421 20:13:32.315273   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.315733   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.315749   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.316076   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.316266   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:13:32.317782   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:32.319768   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:13:29.498089   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:31.998654   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:32.321159   73732 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:32.321177   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:13:32.321194   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:32.323894   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.324604   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0421 20:13:32.324967   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.325345   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:32.325364   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.325514   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:32.325650   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:32.326030   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.326048   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.326223   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:32.326339   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:32.326566   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.327005   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.327036   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.346651   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0421 20:13:32.347069   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.347564   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.347582   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.347960   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.348175   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:13:32.353346   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:32.353664   73732 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:32.353681   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:13:32.353700   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:32.356322   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.356675   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:32.356697   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.356814   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:32.356974   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:32.357099   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:32.357210   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:32.592039   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:13:32.673336   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:13:32.674714   73732 node_ready.go:35] waiting up to 15m0s for node "bridge-474762" to be "Ready" ...
	I0421 20:13:32.738875   73732 node_ready.go:49] node "bridge-474762" has status "Ready":"True"
	I0421 20:13:32.738908   73732 node_ready.go:38] duration metric: took 64.170466ms for node "bridge-474762" to be "Ready" ...
	I0421 20:13:32.738920   73732 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:32.784377   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:32.814553   73732 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:32.844522   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:33.586943   73732 start.go:946] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
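The pipeline logged at 20:13:32.673336 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.50.1 in this run). The following is a minimal standalone sketch of that edit, not minikube's actual code: it inserts the same hosts block ahead of the forward plugin in a Corefile string.

// corefile_hosts.go - sketch only: mirror the sed expression above by inserting a
// "hosts" stanza before the "forward . /etc/resolv.conf" line of a Corefile.
// (The real pipeline also inserts "log" before "errors"; omitted here for brevity.)
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert before the forward plugin, like sed's "i" command
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return out.String()
}

func main() {
	// Abbreviated example Corefile; a real one carries more plugins.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
}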
	I0421 20:13:33.587016   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:33.587045   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:33.587313   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:33.587335   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:33.587340   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:33.587363   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:33.587371   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:33.587752   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:33.587805   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:33.587816   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:33.598277   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:33.598298   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:33.598682   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:33.598684   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:33.598703   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:34.098597   73732 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-474762" context rescaled to 1 replicas
	I0421 20:13:34.252141   73732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.407581868s)
	I0421 20:13:34.252194   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:34.252208   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:34.252588   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:34.252621   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:34.252637   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:34.252651   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:34.252660   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:34.252905   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:34.252918   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:34.254729   73732 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 20:13:34.256291   73732 addons.go:505] duration metric: took 1.982948284s for enable addons: enabled=[default-storageclass storage-provisioner]
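Both addon manifests were applied with the guest's bundled kubectl and KUBECONFIG pointed at /var/lib/minikube/kubeconfig (20:13:32.784377 and 20:13:32.844522). A rough equivalent of one of those invocations, assuming kubectl is on PATH and run inside the guest; this is illustrative, not the harness's ssh_runner call.

// apply_addon.go - sketch: apply an addon manifest with an explicit kubeconfig,
// mirroring the storage-provisioner apply seen in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}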
	I0421 20:13:34.826718   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:34.007036   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:35.499399   72192 pod_ready.go:92] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.499421   72192 pod_ready.go:81] duration metric: took 15.008658343s for pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.499430   72192 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.505075   72192 pod_ready.go:92] pod "etcd-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.505098   72192 pod_ready.go:81] duration metric: took 5.659703ms for pod "etcd-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.505110   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.509795   72192 pod_ready.go:92] pod "kube-apiserver-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.509814   72192 pod_ready.go:81] duration metric: took 4.694619ms for pod "kube-apiserver-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.509825   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.514945   72192 pod_ready.go:92] pod "kube-controller-manager-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.514966   72192 pod_ready.go:81] duration metric: took 5.132029ms for pod "kube-controller-manager-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.514979   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-4gmfm" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.519804   72192 pod_ready.go:92] pod "kube-proxy-4gmfm" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.519834   72192 pod_ready.go:81] duration metric: took 4.846952ms for pod "kube-proxy-4gmfm" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.519853   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.896620   72192 pod_ready.go:92] pod "kube-scheduler-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.896650   72192 pod_ready.go:81] duration metric: took 376.789363ms for pod "kube-scheduler-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.896661   72192 pod_ready.go:38] duration metric: took 15.413547538s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:35.896675   72192 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:13:35.896726   72192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:13:35.917584   72192 api_server.go:72] duration metric: took 28.271775974s to wait for apiserver process to appear ...
	I0421 20:13:35.917611   72192 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:13:35.917632   72192 api_server.go:253] Checking apiserver healthz at https://192.168.61.193:8443/healthz ...
	I0421 20:13:35.923813   72192 api_server.go:279] https://192.168.61.193:8443/healthz returned 200:
	ok
	I0421 20:13:35.925274   72192 api_server.go:141] control plane version: v1.30.0
	I0421 20:13:35.925293   72192 api_server.go:131] duration metric: took 7.674656ms to wait for apiserver health ...
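The healthz wait above polls https://192.168.61.193:8443/healthz until it answers 200 "ok". A compact sketch of such a probe follows; the address is taken from the log, TLS verification is skipped purely to keep the example short, and this is not minikube's actual implementation.

// healthz_probe.go - sketch: poll the apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.193:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthz returned 200: ok")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}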
	I0421 20:13:35.925303   72192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:13:36.100733   72192 system_pods.go:59] 7 kube-system pods found
	I0421 20:13:36.100768   72192 system_pods.go:61] "coredns-7db6d8ff4d-2sh9b" [1f8f4071-8007-4f4b-8b9a-8b24f1548b3c] Running
	I0421 20:13:36.100776   72192 system_pods.go:61] "etcd-flannel-474762" [81d3d998-92c8-42b3-8c04-996a538e51ad] Running
	I0421 20:13:36.100781   72192 system_pods.go:61] "kube-apiserver-flannel-474762" [304ec604-34e2-4acf-9731-c02e79ed97af] Running
	I0421 20:13:36.100786   72192 system_pods.go:61] "kube-controller-manager-flannel-474762" [b8cd61c8-b9c3-4a1b-98da-becd00c4d3fe] Running
	I0421 20:13:36.100791   72192 system_pods.go:61] "kube-proxy-4gmfm" [b98d303b-12ea-4d1d-9c9c-768eedc98a02] Running
	I0421 20:13:36.100796   72192 system_pods.go:61] "kube-scheduler-flannel-474762" [798b1fa8-b941-4aaf-a0b2-d633bed69ee4] Running
	I0421 20:13:36.100800   72192 system_pods.go:61] "storage-provisioner" [2ae2cee4-1115-4004-8033-7c296c63d587] Running
	I0421 20:13:36.100809   72192 system_pods.go:74] duration metric: took 175.498295ms to wait for pod list to return data ...
	I0421 20:13:36.100818   72192 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:13:36.294758   72192 default_sa.go:45] found service account: "default"
	I0421 20:13:36.294786   72192 default_sa.go:55] duration metric: took 193.954425ms for default service account to be created ...
	I0421 20:13:36.294797   72192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:13:36.499318   72192 system_pods.go:86] 7 kube-system pods found
	I0421 20:13:36.499340   72192 system_pods.go:89] "coredns-7db6d8ff4d-2sh9b" [1f8f4071-8007-4f4b-8b9a-8b24f1548b3c] Running
	I0421 20:13:36.499346   72192 system_pods.go:89] "etcd-flannel-474762" [81d3d998-92c8-42b3-8c04-996a538e51ad] Running
	I0421 20:13:36.499350   72192 system_pods.go:89] "kube-apiserver-flannel-474762" [304ec604-34e2-4acf-9731-c02e79ed97af] Running
	I0421 20:13:36.499355   72192 system_pods.go:89] "kube-controller-manager-flannel-474762" [b8cd61c8-b9c3-4a1b-98da-becd00c4d3fe] Running
	I0421 20:13:36.499368   72192 system_pods.go:89] "kube-proxy-4gmfm" [b98d303b-12ea-4d1d-9c9c-768eedc98a02] Running
	I0421 20:13:36.499372   72192 system_pods.go:89] "kube-scheduler-flannel-474762" [798b1fa8-b941-4aaf-a0b2-d633bed69ee4] Running
	I0421 20:13:36.499376   72192 system_pods.go:89] "storage-provisioner" [2ae2cee4-1115-4004-8033-7c296c63d587] Running
	I0421 20:13:36.499382   72192 system_pods.go:126] duration metric: took 204.579054ms to wait for k8s-apps to be running ...
	I0421 20:13:36.499394   72192 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:13:36.499432   72192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:13:36.521795   72192 system_svc.go:56] duration metric: took 22.393237ms WaitForService to wait for kubelet
	I0421 20:13:36.521823   72192 kubeadm.go:576] duration metric: took 28.876017111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:13:36.521855   72192 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:13:36.695108   72192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:13:36.695135   72192 node_conditions.go:123] node cpu capacity is 2
	I0421 20:13:36.695154   72192 node_conditions.go:105] duration metric: took 173.293814ms to run NodePressure ...
	I0421 20:13:36.695167   72192 start.go:240] waiting for startup goroutines ...
	I0421 20:13:36.695176   72192 start.go:245] waiting for cluster config update ...
	I0421 20:13:36.695188   72192 start.go:254] writing updated cluster config ...
	I0421 20:13:36.695399   72192 ssh_runner.go:195] Run: rm -f paused
	I0421 20:13:36.760642   72192 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:13:36.763630   72192 out.go:177] * Done! kubectl is now configured to use "flannel-474762" cluster and "default" namespace by default
	I0421 20:13:37.330538   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:39.822415   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:41.824271   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:44.322460   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:44.830308   73732 pod_ready.go:97] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.35 HostIPs:[{IP:192.168.50.
35}] PodIP: PodIPs:[] StartTime:2024-04-21 20:13:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:13:34 +0000 UTC,FinishedAt:2024-04-21 20:13:44 +0000 UTC,ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d Started:0xc002a76400 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:13:44.830348   73732 pod_ready.go:81] duration metric: took 12.01576578s for pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace to be "Ready" ...
	E0421 20:13:44.830363   73732 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.5
0.35 HostIPs:[{IP:192.168.50.35}] PodIP: PodIPs:[] StartTime:2024-04-21 20:13:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:13:34 +0000 UTC,FinishedAt:2024-04-21 20:13:44 +0000 UTC,ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d Started:0xc002a76400 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:13:44.830375   73732 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace to be "Ready" ...
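The Succeeded pod above is the coredns replica retired when the deployment was rescaled to 1 (20:13:34.098597); the readiness wait treats a Succeeded phase as terminal and moves on to the replacement pod named on the preceding line. A condensed sketch of that kind of wait loop, assuming client-go; the kubeconfig path and pod name here are illustrative.

// pod_ready_sketch.go - sketch: wait for a pod's Ready condition, but stop early if
// the pod's phase becomes Succeeded (i.e. the pod completed and will never be Ready).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Status.Phase == corev1.PodSucceeded {
			return fmt.Errorf("pod %q completed (phase Succeeded), skipping", name)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "coredns-7db6d8ff4d-s2pv8"))
}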
	I0421 20:13:46.838586   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:49.338750   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:51.837412   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:54.337103   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:56.338884   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:58.837198   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:00.838008   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:03.339443   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:05.339596   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:07.841981   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:10.337430   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:12.337457   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:13.336935   73732 pod_ready.go:92] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.336957   73732 pod_ready.go:81] duration metric: took 28.506572825s for pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.336966   73732 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.341378   73732 pod_ready.go:92] pod "etcd-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.341394   73732 pod_ready.go:81] duration metric: took 4.423034ms for pod "etcd-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.341402   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.346180   73732 pod_ready.go:92] pod "kube-apiserver-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.346203   73732 pod_ready.go:81] duration metric: took 4.795357ms for pod "kube-apiserver-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.346217   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.351294   73732 pod_ready.go:92] pod "kube-controller-manager-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.351314   73732 pod_ready.go:81] duration metric: took 5.086902ms for pod "kube-controller-manager-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.351323   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-7m4zl" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.355465   73732 pod_ready.go:92] pod "kube-proxy-7m4zl" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.355480   73732 pod_ready.go:81] duration metric: took 4.151092ms for pod "kube-proxy-7m4zl" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.355487   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.734460   73732 pod_ready.go:92] pod "kube-scheduler-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.734479   73732 pod_ready.go:81] duration metric: took 378.985254ms for pod "kube-scheduler-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.734490   73732 pod_ready.go:38] duration metric: took 40.995554584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:14:13.734502   73732 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:14:13.734546   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:14:13.751229   73732 api_server.go:72] duration metric: took 41.477977543s to wait for apiserver process to appear ...
	I0421 20:14:13.751246   73732 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:14:13.751261   73732 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0421 20:14:13.755457   73732 api_server.go:279] https://192.168.50.35:8443/healthz returned 200:
	ok
	I0421 20:14:13.756384   73732 api_server.go:141] control plane version: v1.30.0
	I0421 20:14:13.756399   73732 api_server.go:131] duration metric: took 5.147985ms to wait for apiserver health ...
	I0421 20:14:13.756406   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:14:13.938964   73732 system_pods.go:59] 7 kube-system pods found
	I0421 20:14:13.939001   73732 system_pods.go:61] "coredns-7db6d8ff4d-s2pv8" [9cda56e7-d4f6-4810-959d-ecfba76f4bd1] Running
	I0421 20:14:13.939007   73732 system_pods.go:61] "etcd-bridge-474762" [181cb621-383f-4ede-b8a3-863219989782] Running
	I0421 20:14:13.939013   73732 system_pods.go:61] "kube-apiserver-bridge-474762" [1b718c38-3f70-484b-9444-75418197ac23] Running
	I0421 20:14:13.939018   73732 system_pods.go:61] "kube-controller-manager-bridge-474762" [92a8935f-63b0-46af-b84a-fee815747ad3] Running
	I0421 20:14:13.939023   73732 system_pods.go:61] "kube-proxy-7m4zl" [2d0cfcb1-bc45-4f18-a39c-008228494bf1] Running
	I0421 20:14:13.939027   73732 system_pods.go:61] "kube-scheduler-bridge-474762" [4bc919f5-fa83-4605-ab1a-c00a5fac7cb9] Running
	I0421 20:14:13.939032   73732 system_pods.go:61] "storage-provisioner" [c610bd1d-e889-464a-a081-c8b8379afe79] Running
	I0421 20:14:13.939039   73732 system_pods.go:74] duration metric: took 182.627504ms to wait for pod list to return data ...
	I0421 20:14:13.939049   73732 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:14:14.134340   73732 default_sa.go:45] found service account: "default"
	I0421 20:14:14.134374   73732 default_sa.go:55] duration metric: took 195.317449ms for default service account to be created ...
	I0421 20:14:14.134387   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:14:14.337437   73732 system_pods.go:86] 7 kube-system pods found
	I0421 20:14:14.337463   73732 system_pods.go:89] "coredns-7db6d8ff4d-s2pv8" [9cda56e7-d4f6-4810-959d-ecfba76f4bd1] Running
	I0421 20:14:14.337468   73732 system_pods.go:89] "etcd-bridge-474762" [181cb621-383f-4ede-b8a3-863219989782] Running
	I0421 20:14:14.337472   73732 system_pods.go:89] "kube-apiserver-bridge-474762" [1b718c38-3f70-484b-9444-75418197ac23] Running
	I0421 20:14:14.337476   73732 system_pods.go:89] "kube-controller-manager-bridge-474762" [92a8935f-63b0-46af-b84a-fee815747ad3] Running
	I0421 20:14:14.337480   73732 system_pods.go:89] "kube-proxy-7m4zl" [2d0cfcb1-bc45-4f18-a39c-008228494bf1] Running
	I0421 20:14:14.337483   73732 system_pods.go:89] "kube-scheduler-bridge-474762" [4bc919f5-fa83-4605-ab1a-c00a5fac7cb9] Running
	I0421 20:14:14.337487   73732 system_pods.go:89] "storage-provisioner" [c610bd1d-e889-464a-a081-c8b8379afe79] Running
	I0421 20:14:14.337493   73732 system_pods.go:126] duration metric: took 203.100247ms to wait for k8s-apps to be running ...
	I0421 20:14:14.337499   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:14:14.337539   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:14:14.353672   73732 system_svc.go:56] duration metric: took 16.166964ms WaitForService to wait for kubelet
	I0421 20:14:14.353694   73732 kubeadm.go:576] duration metric: took 42.080447731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:14:14.353709   73732 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:14:14.534010   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:14:14.534039   73732 node_conditions.go:123] node cpu capacity is 2
	I0421 20:14:14.534053   73732 node_conditions.go:105] duration metric: took 180.338582ms to run NodePressure ...
	I0421 20:14:14.534077   73732 start.go:240] waiting for startup goroutines ...
	I0421 20:14:14.534090   73732 start.go:245] waiting for cluster config update ...
	I0421 20:14:14.534107   73732 start.go:254] writing updated cluster config ...
	I0421 20:14:14.534423   73732 ssh_runner.go:195] Run: rm -f paused
	I0421 20:14:14.586510   73732 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:14:14.588612   73732 out.go:177] * Done! kubectl is now configured to use "bridge-474762" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.675182028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730463675158389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=799cbeba-6d20-439f-9d64-a6244e1abce2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.676101266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d93b6b72-edf9-4b5a-9cb0-e131090f5906 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.676157040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d93b6b72-edf9-4b5a-9cb0-e131090f5906 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.676332536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d93b6b72-edf9-4b5a-9cb0-e131090f5906 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.725058749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e18e79c-0e5f-4c8d-8434-c24dc5cc8e68 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.725135130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e18e79c-0e5f-4c8d-8434-c24dc5cc8e68 name=/runtime.v1.RuntimeService/Version
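The entries in this section are CRI-O's debug trace of CRI gRPC traffic: the kubelet (and tools like crictl) call RuntimeService methods such as Version, ImageFsInfo and ListContainers over the runtime socket. A bare-bones sketch of issuing the same Version call with the cri-api client; the socket path is assumed to be CRI-O's default and is not taken from this log.

// cri_version.go - sketch: query CRI-O's RuntimeService/Version over its unix socket,
// mirroring the Version request/response pairs traced above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}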
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.726525308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=151ad568-551e-4988-8f5a-189619e2ab9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.727195203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730463727169883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=151ad568-551e-4988-8f5a-189619e2ab9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.727986041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efa35de4-ea2d-40af-93bc-64ba0be615d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.728071379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efa35de4-ea2d-40af-93bc-64ba0be615d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.728249790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efa35de4-ea2d-40af-93bc-64ba0be615d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.776078920Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f66312f-8f67-446d-b26b-aa85b9b9fdd0 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.776187719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f66312f-8f67-446d-b26b-aa85b9b9fdd0 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.778139561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf8028f4-8a64-4043-89f4-60bb80d82127 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.778762839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730463778734682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf8028f4-8a64-4043-89f4-60bb80d82127 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.780095186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89f8a685-f253-44db-98a8-b3c709b6d572 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.780193787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89f8a685-f253-44db-98a8-b3c709b6d572 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.780403603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89f8a685-f253-44db-98a8-b3c709b6d572 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.821530356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08e1bed7-96ba-4599-bba1-782708325591 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.821693949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08e1bed7-96ba-4599-bba1-782708325591 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.823427498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21e6e017-3445-4178-ad1d-e63d9c55e0a5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.824021215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730463823992368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21e6e017-3445-4178-ad1d-e63d9c55e0a5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.825158259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=749304c9-9240-4e3f-b06f-22fcf55ebda2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.825249826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=749304c9-9240-4e3f-b06f-22fcf55ebda2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:14:23 embed-certs-727235 crio[724]: time="2024-04-21 20:14:23.825431626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=749304c9-9240-4e3f-b06f-22fcf55ebda2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97ead3853c312       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d1912fd0d8eb3       storage-provisioner
	650fe46c897a4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d1573010f4048       coredns-7db6d8ff4d-b7p8r
	410b67ad10f7c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4de3ba4c06be5       coredns-7db6d8ff4d-mjgjp
	ae051d6fe30b2       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   9eb8f5bc71da1       kube-proxy-zh4fs
	de24f31d2cd03       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   1fe7743526570       etcd-embed-certs-727235
	1d2911b2e722b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   a90c949abbcf0       kube-scheduler-embed-certs-727235
	7e5fa82e60b8f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   3450b6ecd6cbf       kube-controller-manager-embed-certs-727235
	bc553514f919c       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   ef232caeea042       kube-apiserver-embed-certs-727235
	
	
	==> coredns [410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-727235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-727235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=embed-certs-727235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T20_05_05_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-727235
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:14:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:10:31 +0000   Sun, 21 Apr 2024 20:04:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:10:31 +0000   Sun, 21 Apr 2024 20:04:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:10:31 +0000   Sun, 21 Apr 2024 20:04:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:10:31 +0000   Sun, 21 Apr 2024 20:05:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.9
	  Hostname:    embed-certs-727235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3397c26399140dfa6f25ac1a481f4c8
	  System UUID:                b3397c26-3991-40df-a6f2-5ac1a481f4c8
	  Boot ID:                    a6e1c195-555a-4656-b02f-464345d971da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-b7p8r                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-mjgjp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-727235                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-727235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-727235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-zh4fs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-727235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-2vwhn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-727235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-727235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-727235 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-727235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-727235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-727235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-727235 event: Registered Node embed-certs-727235 in Controller
	
	
	==> dmesg <==
	[  +0.044082] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.804092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.565884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.708468] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.720459] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.063024] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078029] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.204346] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.135931] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.322461] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[Apr21 20:00] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.064746] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.456659] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.628994] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.636557] kauditd_printk_skb: 79 callbacks suppressed
	[Apr21 20:04] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.840638] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[Apr21 20:05] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.115569] systemd-fstab-generator[3968]: Ignoring "noauto" option for root device
	[ +13.990862] systemd-fstab-generator[4172]: Ignoring "noauto" option for root device
	[  +0.090208] kauditd_printk_skb: 14 callbacks suppressed
	[Apr21 20:06] kauditd_printk_skb: 88 callbacks suppressed
	[Apr21 20:12] hrtimer: interrupt took 2576916 ns
	
	
	==> etcd [de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306] <==
	{"level":"info","ts":"2024-04-21T20:10:22.930206Z","caller":"traceutil/trace.go:171","msg":"trace[1461147215] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:729; }","duration":"130.32213ms","start":"2024-04-21T20:10:22.799854Z","end":"2024-04-21T20:10:22.930176Z","steps":["trace[1461147215] 'count revisions from in-memory index tree'  (duration: 129.980538ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:23.982193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.9615ms","expected-duration":"100ms","prefix":"","request":"header:<ID:206196922829616161 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.9\" mod_revision:722 > success:<request_put:<key:\"/registry/masterleases/192.168.72.9\" value_size:65 lease:206196922829616159 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.9\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-21T20:10:23.982385Z","caller":"traceutil/trace.go:171","msg":"trace[813453418] linearizableReadLoop","detail":"{readStateIndex:805; appliedIndex:804; }","duration":"347.335777ms","start":"2024-04-21T20:10:23.635035Z","end":"2024-04-21T20:10:23.982371Z","steps":["trace[813453418] 'read index received'  (duration: 229.049211ms)","trace[813453418] 'applied index is now lower than readState.Index'  (duration: 118.284837ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T20:10:23.982457Z","caller":"traceutil/trace.go:171","msg":"trace[1897830221] transaction","detail":"{read_only:false; response_revision:730; number_of_response:1; }","duration":"378.039136ms","start":"2024-04-21T20:10:23.604411Z","end":"2024-04-21T20:10:23.98245Z","steps":["trace[1897830221] 'process raft request'  (duration: 259.724005ms)","trace[1897830221] 'compare'  (duration: 117.766033ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:10:23.982549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:23.604395Z","time spent":"378.104578ms","remote":"127.0.0.1:45988","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.72.9\" mod_revision:722 > success:<request_put:<key:\"/registry/masterleases/192.168.72.9\" value_size:65 lease:206196922829616159 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.9\" > >"}
	{"level":"warn","ts":"2024-04-21T20:10:23.982713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.380663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-21T20:10:23.982849Z","caller":"traceutil/trace.go:171","msg":"trace[970421004] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:730; }","duration":"326.541302ms","start":"2024-04-21T20:10:23.656295Z","end":"2024-04-21T20:10:23.982836Z","steps":["trace[970421004] 'agreement among raft nodes before linearized reading'  (duration: 326.18014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:23.983007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:23.656281Z","time spent":"326.710807ms","remote":"127.0.0.1:46036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":54,"response size":30,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"warn","ts":"2024-04-21T20:10:23.983076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.036359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:10:23.983133Z","caller":"traceutil/trace.go:171","msg":"trace[1700575669] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:730; }","duration":"348.118724ms","start":"2024-04-21T20:10:23.635003Z","end":"2024-04-21T20:10:23.983122Z","steps":["trace[1700575669] 'agreement among raft nodes before linearized reading'  (duration: 348.045477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:23.983164Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:23.634988Z","time spent":"348.169747ms","remote":"127.0.0.1:46160","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-04-21T20:10:59.001026Z","caller":"traceutil/trace.go:171","msg":"trace[1152693257] transaction","detail":"{read_only:false; response_revision:759; number_of_response:1; }","duration":"125.873603ms","start":"2024-04-21T20:10:58.875114Z","end":"2024-04-21T20:10:59.000988Z","steps":["trace[1152693257] 'process raft request'  (duration: 125.71521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:59.001484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.245638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:10:59.001632Z","caller":"traceutil/trace.go:171","msg":"trace[1388087734] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:759; }","duration":"117.376124ms","start":"2024-04-21T20:10:58.884174Z","end":"2024-04-21T20:10:59.00155Z","steps":["trace[1388087734] 'agreement among raft nodes before linearized reading'  (duration: 117.203095ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:10:59.001323Z","caller":"traceutil/trace.go:171","msg":"trace[747327944] linearizableReadLoop","detail":"{readStateIndex:841; appliedIndex:841; }","duration":"117.10792ms","start":"2024-04-21T20:10:58.884197Z","end":"2024-04-21T20:10:59.001305Z","steps":["trace[747327944] 'read index received'  (duration: 117.099776ms)","trace[747327944] 'applied index is now lower than readState.Index'  (duration: 7.014µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:10:59.390522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.552307ms","expected-duration":"100ms","prefix":"","request":"header:<ID:206196922829616332 > lease_revoke:<id:02dc8f024309b47f>","response":"size:28"}
	{"level":"info","ts":"2024-04-21T20:10:59.390669Z","caller":"traceutil/trace.go:171","msg":"trace[269270091] linearizableReadLoop","detail":"{readStateIndex:842; appliedIndex:841; }","duration":"387.708901ms","start":"2024-04-21T20:10:59.002946Z","end":"2024-04-21T20:10:59.390655Z","steps":["trace[269270091] 'read index received'  (duration: 66.934401ms)","trace[269270091] 'applied index is now lower than readState.Index'  (duration: 320.773334ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:10:59.390724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.764392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:10:59.390738Z","caller":"traceutil/trace.go:171","msg":"trace[1519781682] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:759; }","duration":"387.808333ms","start":"2024-04-21T20:10:59.002925Z","end":"2024-04-21T20:10:59.390733Z","steps":["trace[1519781682] 'agreement among raft nodes before linearized reading'  (duration: 387.763791ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:59.390789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:59.002912Z","time spent":"387.851178ms","remote":"127.0.0.1:45968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-21T20:12:42.136452Z","caller":"traceutil/trace.go:171","msg":"trace[2027134893] transaction","detail":"{read_only:false; response_revision:842; number_of_response:1; }","duration":"352.967868ms","start":"2024-04-21T20:12:41.783434Z","end":"2024-04-21T20:12:42.136402Z","steps":["trace[2027134893] 'process raft request'  (duration: 352.812507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:12:42.1368Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:12:41.783417Z","time spent":"353.119724ms","remote":"127.0.0.1:46132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:841 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-21T20:12:42.139179Z","caller":"traceutil/trace.go:171","msg":"trace[418760929] linearizableReadLoop","detail":"{readStateIndex:945; appliedIndex:945; }","duration":"252.699186ms","start":"2024-04-21T20:12:41.884849Z","end":"2024-04-21T20:12:42.137548Z","steps":["trace[418760929] 'read index received'  (duration: 252.694133ms)","trace[418760929] 'applied index is now lower than readState.Index'  (duration: 4.085µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:12:42.13937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.515421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:12:42.13945Z","caller":"traceutil/trace.go:171","msg":"trace[2050415615] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:842; }","duration":"254.620928ms","start":"2024-04-21T20:12:41.884807Z","end":"2024-04-21T20:12:42.139428Z","steps":["trace[2050415615] 'agreement among raft nodes before linearized reading'  (duration: 254.519856ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:14:24 up 14 min,  0 users,  load average: 0.13, 0.25, 0.22
	Linux embed-certs-727235 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f] <==
	I0421 20:08:20.781078       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:10:01.418853       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:10:01.419309       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0421 20:10:02.419909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:10:02.420041       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:10:02.420079       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:10:02.420088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:10:02.420330       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:10:02.421096       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:11:02.421243       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:11:02.421666       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:11:02.421758       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:11:02.421713       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:11:02.421975       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:11:02.422936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:13:02.421944       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:13:02.422676       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:13:02.422754       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:13:02.424055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:13:02.424191       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:13:02.424207       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3] <==
	I0421 20:08:47.436548       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:09:16.994548       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:09:17.448780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:09:47.000992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:09:47.456240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:10:17.008543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:10:17.469256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:10:47.014068       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:10:47.477234       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:11:13.330889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="555.319µs"
	E0421 20:11:17.021225       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:11:17.487325       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:11:24.328166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="220.417µs"
	E0421 20:11:47.027732       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:11:47.496512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:12:17.036495       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:12:17.508443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:12:47.041823       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:12:47.522796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:13:17.051072       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:13:17.544081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:13:47.056380       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:13:47.554397       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:14:17.062400       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:14:17.565384       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb] <==
	I0421 20:05:18.648882       1 server_linux.go:69] "Using iptables proxy"
	I0421 20:05:18.669011       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.9"]
	I0421 20:05:18.782720       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 20:05:18.782798       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 20:05:18.782822       1 server_linux.go:165] "Using iptables Proxier"
	I0421 20:05:18.786325       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 20:05:18.786508       1 server.go:872] "Version info" version="v1.30.0"
	I0421 20:05:18.786531       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 20:05:18.789709       1 config.go:319] "Starting node config controller"
	I0421 20:05:18.789720       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 20:05:18.789922       1 config.go:192] "Starting service config controller"
	I0421 20:05:18.789932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 20:05:18.789957       1 config.go:101] "Starting endpoint slice config controller"
	I0421 20:05:18.789960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 20:05:18.890628       1 shared_informer.go:320] Caches are synced for service config
	I0421 20:05:18.890695       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 20:05:18.890905       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726] <==
	W0421 20:05:02.393939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 20:05:02.393997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 20:05:02.457222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 20:05:02.457317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 20:05:02.475869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 20:05:02.476549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 20:05:02.476927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.477051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.522692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.522788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.539002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.539247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.566548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.568664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.587525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 20:05:02.587745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 20:05:02.627628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 20:05:02.627753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 20:05:02.775365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 20:05:02.775745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 20:05:02.849790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 20:05:02.849902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 20:05:03.012975       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 20:05:03.013147       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0421 20:05:05.059166       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:12:04 embed-certs-727235 kubelet[3974]: E0421 20:12:04.337976    3974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:12:04 embed-certs-727235 kubelet[3974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:12:04 embed-certs-727235 kubelet[3974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:12:04 embed-certs-727235 kubelet[3974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:12:04 embed-certs-727235 kubelet[3974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:12:15 embed-certs-727235 kubelet[3974]: E0421 20:12:15.308484    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:12:30 embed-certs-727235 kubelet[3974]: E0421 20:12:30.305913    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:12:44 embed-certs-727235 kubelet[3974]: E0421 20:12:44.306924    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:12:55 embed-certs-727235 kubelet[3974]: E0421 20:12:55.307137    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:13:04 embed-certs-727235 kubelet[3974]: E0421 20:13:04.336744    3974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:13:04 embed-certs-727235 kubelet[3974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:13:04 embed-certs-727235 kubelet[3974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:13:04 embed-certs-727235 kubelet[3974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:13:04 embed-certs-727235 kubelet[3974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:13:10 embed-certs-727235 kubelet[3974]: E0421 20:13:10.307070    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:13:24 embed-certs-727235 kubelet[3974]: E0421 20:13:24.307768    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:13:39 embed-certs-727235 kubelet[3974]: E0421 20:13:39.305525    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:13:50 embed-certs-727235 kubelet[3974]: E0421 20:13:50.305626    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:14:03 embed-certs-727235 kubelet[3974]: E0421 20:14:03.305331    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:14:04 embed-certs-727235 kubelet[3974]: E0421 20:14:04.335472    3974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:14:04 embed-certs-727235 kubelet[3974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:14:04 embed-certs-727235 kubelet[3974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:14:04 embed-certs-727235 kubelet[3974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:14:04 embed-certs-727235 kubelet[3974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:14:17 embed-certs-727235 kubelet[3974]: E0421 20:14:17.307079    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	
	
	==> storage-provisioner [97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6] <==
	I0421 20:05:20.339385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 20:05:20.354169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 20:05:20.354371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 20:05:20.367867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 20:05:20.368841       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aeb25cf9-c04b-4331-b76b-6c89e286eace", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-727235_90871dc4-1acd-4fad-8088-3c43628171d2 became leader
	I0421 20:05:20.370120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-727235_90871dc4-1acd-4fad-8088-3c43628171d2!
	I0421 20:05:20.477947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-727235_90871dc4-1acd-4fad-8088-3c43628171d2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727235 -n embed-certs-727235
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-727235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2vwhn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-727235 describe pod metrics-server-569cc877fc-2vwhn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-727235 describe pod metrics-server-569cc877fc-2vwhn: exit status 1 (58.775506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2vwhn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-727235 describe pod metrics-server-569cc877fc-2vwhn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (137.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
[the WARNING line above repeats 38 more times: dial tcp 192.168.50.42:8443: connect: connection refused]
E0421 20:07:09.253345   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.42:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.42:8443: connect: connection refused
[the WARNING line above repeats 96 more times: dial tcp 192.168.50.42:8443: connect: connection refused]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (253.705068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-867585" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-867585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-867585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.868µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-867585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
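Note: the checks above could not run because the apiserver at 192.168.50.42:8443 was refusing connections, so the deployment info is empty. As a hedged sketch for manual triage (not part of the test harness), once the old-k8s-version-867585 apiserver is reachable again the same image expectation (registry.k8s.io/echoserver:1.4, requested via --images=MetricsScraper in the addon enable command recorded in the Audit table below) could be checked with ordinary kubectl queries; the jsonpath expression is an illustrative assumption:

    kubectl --context old-k8s-version-867585 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-867585 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

If the second command prints an image other than registry.k8s.io/echoserver:1.4, the MetricsScraper image override did not take effect.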
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (251.132166ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-867585 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-867585        | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-167454       | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-167454 | jenkins | v1.33.0 | 21 Apr 24 19:43 UTC | 21 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-167454                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-597568                  | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-597568                                   | no-preload-597568            | jenkins | v1.33.0 | 21 Apr 24 19:44 UTC | 21 Apr 24 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-867585             | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC | 21 Apr 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-867585                              | old-k8s-version-867585       | jenkins | v1.33.0 | 21 Apr 24 19:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-595552                           | kubernetes-upgrade-595552    | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:47 UTC |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:47 UTC | 21 Apr 24 19:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-364614             | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-364614                  | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-364614 --memory=2200 --alsologtostderr   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:50 UTC | 21 Apr 24 19:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-364614 image list                           | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p newest-cni-364614                                   | newest-cni-364614            | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-411651 | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:51 UTC |
	|         | disable-driver-mounts-411651                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:51 UTC | 21 Apr 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-727235            | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC | 21 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-727235                 | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-727235                                  | embed-certs-727235           | jenkins | v1.33.0 | 21 Apr 24 19:54 UTC | 21 Apr 24 20:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:54:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:54:52.830637   62197 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:54:52.830912   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.830926   62197 out.go:304] Setting ErrFile to fd 2...
	I0421 19:54:52.830932   62197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:54:52.831126   62197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:54:52.831742   62197 out.go:298] Setting JSON to false
	I0421 19:54:52.832674   62197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5791,"bootTime":1713723502,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:54:52.832739   62197 start.go:139] virtualization: kvm guest
	I0421 19:54:52.835455   62197 out.go:177] * [embed-certs-727235] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:54:52.837412   62197 notify.go:220] Checking for updates...
	I0421 19:54:52.837418   62197 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:54:52.839465   62197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:54:52.841250   62197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:54:52.842894   62197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:54:52.844479   62197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:54:52.845967   62197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:54:52.847931   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:54:52.848387   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.848464   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.864769   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0421 19:54:52.865105   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.865623   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.865642   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.865919   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.866109   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.866305   62197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:54:52.866589   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.866622   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.880497   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0421 19:54:52.880874   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.881355   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.881380   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.881691   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.881883   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.916395   62197 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 19:54:52.917730   62197 start.go:297] selected driver: kvm2
	I0421 19:54:52.917753   62197 start.go:901] validating driver "kvm2" against &{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.917858   62197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:54:52.918512   62197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.918585   62197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 19:54:52.933446   62197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 19:54:52.933791   62197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:54:52.933845   62197 cni.go:84] Creating CNI manager for ""
	I0421 19:54:52.933858   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:54:52.933901   62197 start.go:340] cluster config:
	{Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:54:52.933981   62197 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:54:52.936907   62197 out.go:177] * Starting "embed-certs-727235" primary control-plane node in "embed-certs-727235" cluster
	I0421 19:54:52.938596   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:54:52.938626   62197 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 19:54:52.938633   62197 cache.go:56] Caching tarball of preloaded images
	I0421 19:54:52.938690   62197 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 19:54:52.938701   62197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 19:54:52.938791   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:54:52.938960   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:54:52.938995   62197 start.go:364] duration metric: took 19.691µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:54:52.939006   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:54:52.939011   62197 fix.go:54] fixHost starting: 
	I0421 19:54:52.939248   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:54:52.939274   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:54:52.953191   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0421 19:54:52.953602   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:54:52.953994   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:54:52.954024   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:54:52.954454   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:54:52.954602   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.954750   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:54:52.956153   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Running err=<nil>
	W0421 19:54:52.956167   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:54:52.958195   62197 out.go:177] * Updating the running kvm2 "embed-certs-727235" VM ...
	I0421 19:54:52.959459   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:54:52.959476   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:54:52.959678   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:54:52.961705   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:51:24 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:54:52.962165   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:54:52.962245   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:54:52.962392   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962555   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:54:52.962682   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:54:52.962853   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:54:52.963028   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:54:52.963038   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:54:55.842410   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:58.070842   57617 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.405000958s)
	I0421 19:54:58.070936   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:54:58.089413   57617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:54:58.101786   57617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:54:58.114021   57617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:54:58.114065   57617 kubeadm.go:156] found existing configuration files:
	
	I0421 19:54:58.114126   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0421 19:54:58.124228   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:54:58.124296   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:54:58.135037   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0421 19:54:58.144890   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:54:58.144958   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:54:58.155403   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.165155   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:54:58.165207   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:54:58.175703   57617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0421 19:54:58.185428   57617 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:54:58.185521   57617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:54:58.195328   57617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:54:58.257787   57617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:54:58.257868   57617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:54:58.432626   57617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:54:58.432766   57617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:54:58.432943   57617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:54:58.677807   57617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:54:58.679655   57617 out.go:204]   - Generating certificates and keys ...
	I0421 19:54:58.679763   57617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:54:58.679856   57617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:54:58.679974   57617 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:54:58.680053   57617 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:54:58.680125   57617 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:54:58.680177   57617 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:54:58.681691   57617 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:54:58.682034   57617 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:54:58.682257   57617 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:54:58.682547   57617 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:54:58.682770   57617 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:54:58.682840   57617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:54:58.938223   57617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:54:58.989244   57617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:54:59.196060   57617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:54:59.378330   57617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:54:59.435654   57617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:54:59.436159   57617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:54:59.440839   57617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:54:58.914303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:54:59.442694   57617 out.go:204]   - Booting up control plane ...
	I0421 19:54:59.442826   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:54:59.442942   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:54:59.443122   57617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:54:59.466298   57617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:54:59.469370   57617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:54:59.469656   57617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:54:59.622281   57617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:54:59.622433   57617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:55:00.123513   57617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.401309ms
	I0421 19:55:00.123606   57617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:55:05.627324   57617 kubeadm.go:309] [api-check] The API server is healthy after 5.503528473s
	I0421 19:55:05.644392   57617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:55:05.666212   57617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:55:05.696150   57617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:55:05.696487   57617 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-167454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:55:05.709873   57617 kubeadm.go:309] [bootstrap-token] Using token: ypxtpg.5u6l3v2as04iv2aj
	I0421 19:55:05.711407   57617 out.go:204]   - Configuring RBAC rules ...
	I0421 19:55:05.711556   57617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:55:05.721552   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:55:05.735168   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:55:05.739580   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:55:05.743466   57617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:55:05.747854   57617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:55:06.034775   57617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:55:06.468585   57617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:55:07.036924   57617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:55:07.036983   57617 kubeadm.go:309] 
	I0421 19:55:07.037040   57617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:55:07.037060   57617 kubeadm.go:309] 
	I0421 19:55:07.037199   57617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:55:07.037218   57617 kubeadm.go:309] 
	I0421 19:55:07.037259   57617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:55:07.037348   57617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:55:07.037419   57617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:55:07.037433   57617 kubeadm.go:309] 
	I0421 19:55:07.037526   57617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:55:07.037540   57617 kubeadm.go:309] 
	I0421 19:55:07.037604   57617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:55:07.037615   57617 kubeadm.go:309] 
	I0421 19:55:07.037681   57617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:55:07.037760   57617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:55:07.037823   57617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:55:07.037828   57617 kubeadm.go:309] 
	I0421 19:55:07.037899   57617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:55:07.037964   57617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:55:07.037970   57617 kubeadm.go:309] 
	I0421 19:55:07.038098   57617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038255   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 19:55:07.038283   57617 kubeadm.go:309] 	--control-plane 
	I0421 19:55:07.038288   57617 kubeadm.go:309] 
	I0421 19:55:07.038400   57617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:55:07.038411   57617 kubeadm.go:309] 
	I0421 19:55:07.038517   57617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token ypxtpg.5u6l3v2as04iv2aj \
	I0421 19:55:07.038672   57617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 19:55:07.038956   57617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:55:07.038982   57617 cni.go:84] Creating CNI manager for ""
	I0421 19:55:07.038998   57617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 19:55:07.040852   57617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 19:55:04.994338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:07.042257   57617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 19:55:07.057287   57617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 19:55:07.078228   57617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:55:07.078330   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.078390   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-167454 minikube.k8s.io/updated_at=2024_04_21T19_55_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=default-k8s-diff-port-167454 minikube.k8s.io/primary=true
	I0421 19:55:07.128726   57617 ops.go:34] apiserver oom_adj: -16
	I0421 19:55:07.277531   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:07.778563   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.066312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:08.278441   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:08.778051   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.277768   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:09.777868   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.278602   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:10.777607   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.278260   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:11.777609   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.277684   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:12.778116   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.146347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:17.218265   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:13.278439   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:13.777901   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.278214   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:14.777957   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.278369   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:15.778113   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.277991   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:16.778322   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.278350   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:17.778144   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.278465   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:18.778049   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.278228   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.777615   57617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:55:19.945015   57617 kubeadm.go:1107] duration metric: took 12.866746923s to wait for elevateKubeSystemPrivileges
	W0421 19:55:19.945062   57617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:55:19.945073   57617 kubeadm.go:393] duration metric: took 5m11.113256567s to StartCluster
	I0421 19:55:19.945094   57617 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.945186   57617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:55:19.947618   57617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:55:19.947919   57617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.23 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 19:55:19.949819   57617 out.go:177] * Verifying Kubernetes components...
	I0421 19:55:19.947983   57617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:55:19.948132   57617 config.go:182] Loaded profile config "default-k8s-diff-port-167454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:55:19.951664   57617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:55:19.951671   57617 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951685   57617 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951708   57617 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-167454"
	I0421 19:55:19.951718   57617 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-167454"
	I0421 19:55:19.951720   57617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-167454"
	W0421 19:55:19.951730   57617 addons.go:243] addon storage-provisioner should already be in state true
	I0421 19:55:19.951741   57617 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.951753   57617 addons.go:243] addon metrics-server should already be in state true
	I0421 19:55:19.951766   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.951781   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.952059   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952095   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952147   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952169   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.952170   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.952378   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.969767   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0421 19:55:19.970291   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.971023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.971053   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.971517   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.971747   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.971966   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0421 19:55:19.972325   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0421 19:55:19.972539   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.972691   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.973050   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973075   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973313   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.973336   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.973408   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973712   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.973986   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974023   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.974287   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.974321   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.976061   57617 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-167454"
	W0421 19:55:19.976086   57617 addons.go:243] addon default-storageclass should already be in state true
	I0421 19:55:19.976116   57617 host.go:66] Checking if "default-k8s-diff-port-167454" exists ...
	I0421 19:55:19.976473   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:19.976513   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:19.989851   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I0421 19:55:19.990053   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0421 19:55:19.990494   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.990573   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:19.991023   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991039   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991170   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:19.991197   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:19.991380   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991527   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:19.991556   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.991713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:19.993398   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995704   57617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:55:19.994181   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:19.995594   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0421 19:55:19.997429   57617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:19.997450   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:55:19.997470   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:19.998995   57617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 19:55:19.997642   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.000129   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000728   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.000743   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.000638   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.000805   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 19:55:20.000816   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 19:55:20.000826   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.000991   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.001147   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.001328   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.001340   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.001362   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.001763   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.002313   57617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:55:20.002335   57617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:55:20.003803   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004388   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.004404   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.004602   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.004792   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.004958   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.005128   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.018016   57617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0421 19:55:20.018651   57617 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:55:20.019177   57617 main.go:141] libmachine: Using API Version  1
	I0421 19:55:20.019196   57617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:55:20.019422   57617 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:55:20.019702   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetState
	I0421 19:55:20.021066   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .DriverName
	I0421 19:55:20.021324   57617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.021340   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:55:20.021357   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHHostname
	I0421 19:55:20.024124   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024503   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:16:27", ip: ""} in network mk-default-k8s-diff-port-167454: {Iface:virbr3 ExpiryTime:2024-04-21 20:49:53 +0000 UTC Type:0 Mac:52:54:00:8e:16:27 Iaid: IPaddr:192.168.61.23 Prefix:24 Hostname:default-k8s-diff-port-167454 Clientid:01:52:54:00:8e:16:27}
	I0421 19:55:20.024524   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | domain default-k8s-diff-port-167454 has defined IP address 192.168.61.23 and MAC address 52:54:00:8e:16:27 in network mk-default-k8s-diff-port-167454
	I0421 19:55:20.024686   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHPort
	I0421 19:55:20.024848   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHKeyPath
	I0421 19:55:20.025030   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .GetSSHUsername
	I0421 19:55:20.025184   57617 sshutil.go:53] new ssh client: &{IP:192.168.61.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/default-k8s-diff-port-167454/id_rsa Username:docker}
	I0421 19:55:20.214689   57617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:55:20.264530   57617 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.281976   57617 node_ready.go:49] node "default-k8s-diff-port-167454" has status "Ready":"True"
	I0421 19:55:20.281999   57617 node_ready.go:38] duration metric: took 17.434628ms for node "default-k8s-diff-port-167454" to be "Ready" ...
	I0421 19:55:20.282007   57617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:20.297108   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:20.386102   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:55:20.408686   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 19:55:20.408706   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 19:55:20.416022   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:55:20.455756   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 19:55:20.455778   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 19:55:20.603535   57617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.603559   57617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 19:55:20.690543   57617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 19:55:20.842718   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.842753   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843074   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843148   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843163   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.843172   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.843191   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.843475   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.843511   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.843525   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:20.856272   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:20.856294   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:20.856618   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:20.856636   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:20.856673   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550249   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13418491s)
	I0421 19:55:21.550297   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550305   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550577   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550654   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:21.550663   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.550675   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:21.550684   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:21.550928   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:21.550946   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:21.853935   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.853970   57617 pod_ready.go:81] duration metric: took 1.556832657s for pod "coredns-7db6d8ff4d-lbtcm" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.853984   57617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924815   57617 pod_ready.go:92] pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.924845   57617 pod_ready.go:81] duration metric: took 70.852928ms for pod "coredns-7db6d8ff4d-xmhm6" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.924857   57617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955217   57617 pod_ready.go:92] pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.955246   57617 pod_ready.go:81] duration metric: took 30.380253ms for pod "etcd-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.955259   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975065   57617 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.975094   57617 pod_ready.go:81] duration metric: took 19.818959ms for pod "kube-apiserver-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.975106   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981884   57617 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:21.981907   57617 pod_ready.go:81] duration metric: took 6.791796ms for pod "kube-controller-manager-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:21.981919   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.001934   57617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311352362s)
	I0421 19:55:22.001984   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002000   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002311   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002369   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002330   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.002410   57617 main.go:141] libmachine: Making call to close driver server
	I0421 19:55:22.002434   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) Calling .Close
	I0421 19:55:22.002649   57617 main.go:141] libmachine: Successfully made call to close driver server
	I0421 19:55:22.002689   57617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 19:55:22.002705   57617 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-167454"
	I0421 19:55:22.002713   57617 main.go:141] libmachine: (default-k8s-diff-port-167454) DBG | Closing plugin on server side
	I0421 19:55:22.005036   57617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0421 19:55:22.006362   57617 addons.go:505] duration metric: took 2.058380621s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0421 19:55:22.269772   57617 pod_ready.go:92] pod "kube-proxy-wmv4v" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.269798   57617 pod_ready.go:81] duration metric: took 287.872366ms for pod "kube-proxy-wmv4v" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.269808   57617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668470   57617 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace has status "Ready":"True"
	I0421 19:55:22.668494   57617 pod_ready.go:81] duration metric: took 398.679544ms for pod "kube-scheduler-default-k8s-diff-port-167454" in "kube-system" namespace to be "Ready" ...
	I0421 19:55:22.668502   57617 pod_ready.go:38] duration metric: took 2.386486578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:55:22.668516   57617 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:55:22.668560   57617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:55:22.688191   57617 api_server.go:72] duration metric: took 2.740229162s to wait for apiserver process to appear ...
	I0421 19:55:22.688224   57617 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:55:22.688244   57617 api_server.go:253] Checking apiserver healthz at https://192.168.61.23:8444/healthz ...
	I0421 19:55:22.699424   57617 api_server.go:279] https://192.168.61.23:8444/healthz returned 200:
	ok
	I0421 19:55:22.700614   57617 api_server.go:141] control plane version: v1.30.0
	I0421 19:55:22.700636   57617 api_server.go:131] duration metric: took 12.404937ms to wait for apiserver health ...
	I0421 19:55:22.700643   57617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:55:22.873594   57617 system_pods.go:59] 9 kube-system pods found
	I0421 19:55:22.873622   57617 system_pods.go:61] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:22.873631   57617 system_pods.go:61] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:22.873635   57617 system_pods.go:61] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:22.873639   57617 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:22.873643   57617 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:22.873647   57617 system_pods.go:61] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:22.873651   57617 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:22.873657   57617 system_pods.go:61] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:22.873698   57617 system_pods.go:61] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:22.873717   57617 system_pods.go:74] duration metric: took 173.068164ms to wait for pod list to return data ...
	I0421 19:55:22.873731   57617 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:55:23.068026   57617 default_sa.go:45] found service account: "default"
	I0421 19:55:23.068053   57617 default_sa.go:55] duration metric: took 194.313071ms for default service account to be created ...
	I0421 19:55:23.068064   57617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:55:23.272118   57617 system_pods.go:86] 9 kube-system pods found
	I0421 19:55:23.272148   57617 system_pods.go:89] "coredns-7db6d8ff4d-lbtcm" [1c0a091d-255b-4d65-81b5-5324a00de777] Running
	I0421 19:55:23.272156   57617 system_pods.go:89] "coredns-7db6d8ff4d-xmhm6" [3dbf5552-a097-4fb9-99ac-9119d3b8b4c7] Running
	I0421 19:55:23.272162   57617 system_pods.go:89] "etcd-default-k8s-diff-port-167454" [d21e2de8-8cef-4841-9bb9-03f23fab535e] Running
	I0421 19:55:23.272168   57617 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-167454" [fc1534de-46ec-4f8a-9abd-d3101492b5aa] Running
	I0421 19:55:23.272173   57617 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-167454" [669448e7-63ee-4141-a1e6-cbf051c48919] Running
	I0421 19:55:23.272178   57617 system_pods.go:89] "kube-proxy-wmv4v" [88fe99c0-e9b4-4267-a849-e5de2e9b4e21] Running
	I0421 19:55:23.272184   57617 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-167454" [e6080823-6c14-4e42-b7df-fcfe6d8fc92b] Running
	I0421 19:55:23.272194   57617 system_pods.go:89] "metrics-server-569cc877fc-55czz" [9bd6c32b-2526-40c9-8096-fb9fef26e927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 19:55:23.272200   57617 system_pods.go:89] "storage-provisioner" [59527419-6bed-43ec-afa1-30d8abbbfc4e] Running
	I0421 19:55:23.272212   57617 system_pods.go:126] duration metric: took 204.142116ms to wait for k8s-apps to be running ...
	I0421 19:55:23.272231   57617 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:55:23.272283   57617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:23.288800   57617 system_svc.go:56] duration metric: took 16.572799ms WaitForService to wait for kubelet
	I0421 19:55:23.288829   57617 kubeadm.go:576] duration metric: took 3.340874079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:55:23.288851   57617 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:55:23.469503   57617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:55:23.469541   57617 node_conditions.go:123] node cpu capacity is 2
	I0421 19:55:23.469554   57617 node_conditions.go:105] duration metric: took 180.696423ms to run NodePressure ...
	I0421 19:55:23.469567   57617 start.go:240] waiting for startup goroutines ...
	I0421 19:55:23.469576   57617 start.go:245] waiting for cluster config update ...
	I0421 19:55:23.469590   57617 start.go:254] writing updated cluster config ...
	I0421 19:55:23.469941   57617 ssh_runner.go:195] Run: rm -f paused
	I0421 19:55:23.521989   57617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:55:23.524030   57617 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-167454" cluster and "default" namespace by default
	I0421 19:55:23.298271   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:29.590689   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:55:29.590767   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:55:29.592377   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:29.592430   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:29.592527   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:29.592662   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:29.592794   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:29.592892   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:29.595022   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:29.595115   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:29.595190   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:29.595263   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:29.595311   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:29.595375   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:29.595433   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:29.595520   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:29.595598   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:29.595680   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:29.595775   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:29.595824   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:29.595875   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:29.595919   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:29.595982   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:29.596047   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:29.596091   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:29.596174   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:29.596256   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:29.596301   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:29.596367   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.598820   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:29.598926   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:29.598993   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:29.599054   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:29.599162   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:29.599331   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:29.599418   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:55:29.599516   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599705   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.599772   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.599936   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600041   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600191   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600244   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600389   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600481   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:55:29.600654   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:55:29.600669   58211 kubeadm.go:309] 
	I0421 19:55:29.600702   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:55:29.600737   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:55:29.600743   58211 kubeadm.go:309] 
	I0421 19:55:29.600777   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:55:29.600810   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:55:29.600901   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:55:29.600908   58211 kubeadm.go:309] 
	I0421 19:55:29.601009   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:55:29.601057   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:55:29.601109   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:55:29.601118   58211 kubeadm.go:309] 
	I0421 19:55:29.601224   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:55:29.601323   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:55:29.601333   58211 kubeadm.go:309] 
	I0421 19:55:29.601485   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:55:29.601579   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:55:29.601646   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:55:29.601751   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:55:29.601835   58211 kubeadm.go:309] 
	W0421 19:55:29.601862   58211 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0421 19:55:29.601908   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 19:55:30.075850   58211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:55:30.092432   58211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:55:30.103405   58211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:55:30.103429   58211 kubeadm.go:156] found existing configuration files:
	
	I0421 19:55:30.103473   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:55:30.114018   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:55:30.114073   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:55:30.124410   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:55:30.134021   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:55:30.134076   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:55:30.143946   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.153926   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:55:30.153973   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:55:30.164013   58211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:55:30.173459   58211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:55:30.173512   58211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:55:30.184067   58211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:55:30.259108   58211 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0421 19:55:30.259195   58211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:55:30.422144   58211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:55:30.422317   58211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:55:30.422497   58211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:55:30.619194   58211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:55:30.621135   58211 out.go:204]   - Generating certificates and keys ...
	I0421 19:55:30.621258   58211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:55:30.621314   58211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:55:30.621437   58211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 19:55:30.621530   58211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 19:55:30.621617   58211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 19:55:30.621956   58211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 19:55:30.622478   58211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 19:55:30.623068   58211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 19:55:30.623509   58211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 19:55:30.624072   58211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 19:55:30.624110   58211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 19:55:30.624183   58211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:55:30.871049   58211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:55:30.931466   58211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:55:31.088680   58211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:55:31.275358   58211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:55:31.305344   58211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:55:31.307220   58211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:55:31.307289   58211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:55:31.484365   58211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:55:29.378329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:32.450259   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:31.486164   58211 out.go:204]   - Booting up control plane ...
	I0421 19:55:31.486312   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:55:31.492868   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:55:31.494787   58211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:55:31.496104   58211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:55:31.500190   58211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0421 19:55:38.530370   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:41.602365   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:47.682316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:50.754312   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:56.834318   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:55:59.906313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:05.986294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:09.058300   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:11.503250   58211 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0421 19:56:11.503361   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:11.503618   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:15.138313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:16.504469   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:16.504743   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:18.210376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:24.290344   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:27.366276   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:26.505496   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:26.505769   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:33.442294   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:36.514319   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:42.594275   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:45.670298   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:46.505851   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:56:46.506140   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:56:51.746306   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:56:54.818338   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:00.898357   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:03.974324   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:10.050360   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:13.122376   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:19.202341   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:22.274304   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:26.505043   58211 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0421 19:57:26.505356   58211 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0421 19:57:26.505385   58211 kubeadm.go:309] 
	I0421 19:57:26.505436   58211 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0421 19:57:26.505495   58211 kubeadm.go:309] 		timed out waiting for the condition
	I0421 19:57:26.505505   58211 kubeadm.go:309] 
	I0421 19:57:26.505553   58211 kubeadm.go:309] 	This error is likely caused by:
	I0421 19:57:26.505596   58211 kubeadm.go:309] 		- The kubelet is not running
	I0421 19:57:26.505720   58211 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0421 19:57:26.505730   58211 kubeadm.go:309] 
	I0421 19:57:26.505839   58211 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0421 19:57:26.505883   58211 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0421 19:57:26.505912   58211 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0421 19:57:26.505919   58211 kubeadm.go:309] 
	I0421 19:57:26.506020   58211 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0421 19:57:26.506152   58211 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0421 19:57:26.506181   58211 kubeadm.go:309] 
	I0421 19:57:26.506346   58211 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0421 19:57:26.506480   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0421 19:57:26.506581   58211 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0421 19:57:26.506702   58211 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0421 19:57:26.506721   58211 kubeadm.go:309] 
	I0421 19:57:26.507115   58211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:57:26.507237   58211 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0421 19:57:26.507330   58211 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0421 19:57:26.507409   58211 kubeadm.go:393] duration metric: took 8m0.981544676s to StartCluster
	I0421 19:57:26.507461   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0421 19:57:26.507523   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0421 19:57:26.556647   58211 cri.go:89] found id: ""
	I0421 19:57:26.556676   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.556687   58211 logs.go:278] No container was found matching "kube-apiserver"
	I0421 19:57:26.556695   58211 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0421 19:57:26.556748   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0421 19:57:26.595025   58211 cri.go:89] found id: ""
	I0421 19:57:26.595055   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.595064   58211 logs.go:278] No container was found matching "etcd"
	I0421 19:57:26.595069   58211 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0421 19:57:26.595143   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0421 19:57:26.634084   58211 cri.go:89] found id: ""
	I0421 19:57:26.634115   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.634126   58211 logs.go:278] No container was found matching "coredns"
	I0421 19:57:26.634134   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0421 19:57:26.634201   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0421 19:57:26.672409   58211 cri.go:89] found id: ""
	I0421 19:57:26.672439   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.672450   58211 logs.go:278] No container was found matching "kube-scheduler"
	I0421 19:57:26.672458   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0421 19:57:26.672515   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0421 19:57:26.720123   58211 cri.go:89] found id: ""
	I0421 19:57:26.720151   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.720159   58211 logs.go:278] No container was found matching "kube-proxy"
	I0421 19:57:26.720165   58211 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0421 19:57:26.720219   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0421 19:57:26.756889   58211 cri.go:89] found id: ""
	I0421 19:57:26.756918   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.756929   58211 logs.go:278] No container was found matching "kube-controller-manager"
	I0421 19:57:26.756936   58211 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0421 19:57:26.757044   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0421 19:57:26.802160   58211 cri.go:89] found id: ""
	I0421 19:57:26.802188   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.802197   58211 logs.go:278] No container was found matching "kindnet"
	I0421 19:57:26.802204   58211 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0421 19:57:26.802264   58211 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0421 19:57:26.841543   58211 cri.go:89] found id: ""
	I0421 19:57:26.841567   58211 logs.go:276] 0 containers: []
	W0421 19:57:26.841574   58211 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0421 19:57:26.841583   58211 logs.go:123] Gathering logs for kubelet ...
	I0421 19:57:26.841598   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0421 19:57:26.894547   58211 logs.go:123] Gathering logs for dmesg ...
	I0421 19:57:26.894575   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0421 19:57:26.909052   58211 logs.go:123] Gathering logs for describe nodes ...
	I0421 19:57:26.909077   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0421 19:57:27.002127   58211 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0421 19:57:27.002150   58211 logs.go:123] Gathering logs for CRI-O ...
	I0421 19:57:27.002166   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0421 19:57:27.120460   58211 logs.go:123] Gathering logs for container status ...
	I0421 19:57:27.120494   58211 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0421 19:57:27.170858   58211 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0421 19:57:27.170914   58211 out.go:239] * 
	W0421 19:57:27.170969   58211 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.170990   58211 out.go:239] * 
	W0421 19:57:27.171868   58211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 19:57:27.174893   58211 out.go:177] 
	W0421 19:57:27.176215   58211 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0421 19:57:27.176288   58211 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0421 19:57:27.176319   58211 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0421 19:57:27.177779   58211 out.go:177] 
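The exit above (K8S_KUBELET_NOT_RUNNING) means the kubelet on the v1.20.0 node never answered its healthz probe on 127.0.0.1:10248, so kubeadm's wait-control-plane phase timed out before any control-plane container existed. A minimal manual-triage sketch, using only the commands the kubeadm output itself recommends and assuming a shell on the affected node (for example via minikube ssh against this profile):

    # is the kubelet service up, and why did it last exit?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100

    # probe the same healthz endpoint kubeadm polls
    curl -sSL http://localhost:10248/healthz

    # list any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

The suggestion logged above, passing --extra-config=kubelet.cgroup-driver=systemd to minikube start, points at one common cause on CRI-O hosts: a cgroup-driver mismatch between the kubelet and the container runtime.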
	I0421 19:57:28.354287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:31.426307   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:37.506302   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:40.578329   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:46.658286   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:49.730290   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:55.810303   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:57:58.882287   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:04.962316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:08.038328   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:14.114282   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:17.186379   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:23.270347   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:26.338313   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:32.418266   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:35.494377   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:41.570277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:44.642263   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:50.722316   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:53.794367   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:58:59.874261   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:02.946333   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:09.026296   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:12.098331   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:18.178280   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:21.250268   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:27.330277   62197 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.9:22: connect: no route to host
	I0421 19:59:30.331351   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:59:30.331383   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331744   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:30.331770   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:30.331983   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:30.333880   62197 machine.go:97] duration metric: took 4m37.374404361s to provisionDockerMachine
	I0421 19:59:30.333921   62197 fix.go:56] duration metric: took 4m37.394910099s for fixHost
	I0421 19:59:30.333928   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 4m37.394926037s
	W0421 19:59:30.333945   62197 start.go:713] error starting host: provision: host is not running
	W0421 19:59:30.334039   62197 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0421 19:59:30.334070   62197 start.go:728] Will try again in 5 seconds ...
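Every dial attempt above failed with "no route to host", so provisioning never reached SSH on 192.168.72.9:22 and fixHost gave up with "host is not running" before scheduling a retry. One way to confirm by hand whether the embed-certs-727235 domain is actually up and holding its lease is to ask libvirt directly on the host; this is a sketch that assumes virsh and nc are installed there (the network name mk-embed-certs-727235 and the address come from the log):

    # domain state as libvirt sees it
    virsh list --all | grep embed-certs-727235

    # DHCP leases on the per-profile network
    virsh net-dhcp-leases mk-embed-certs-727235

    # is anything answering on the guest's SSH port?
    nc -vz -w 5 192.168.72.9 22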
	I0421 19:59:35.335761   62197 start.go:360] acquireMachinesLock for embed-certs-727235: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:59:35.335860   62197 start.go:364] duration metric: took 61.013µs to acquireMachinesLock for "embed-certs-727235"
	I0421 19:59:35.335882   62197 start.go:96] Skipping create...Using existing machine configuration
	I0421 19:59:35.335890   62197 fix.go:54] fixHost starting: 
	I0421 19:59:35.336171   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:59:35.336191   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:59:35.351703   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0421 19:59:35.352186   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:59:35.352723   62197 main.go:141] libmachine: Using API Version  1
	I0421 19:59:35.352752   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:59:35.353060   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:59:35.353252   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:35.353458   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 19:59:35.355260   62197 fix.go:112] recreateIfNeeded on embed-certs-727235: state=Stopped err=<nil>
	I0421 19:59:35.355290   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	W0421 19:59:35.355474   62197 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 19:59:35.357145   62197 out.go:177] * Restarting existing kvm2 VM for "embed-certs-727235" ...
	I0421 19:59:35.358345   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Start
	I0421 19:59:35.358510   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring networks are active...
	I0421 19:59:35.359250   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network default is active
	I0421 19:59:35.359533   62197 main.go:141] libmachine: (embed-certs-727235) Ensuring network mk-embed-certs-727235 is active
	I0421 19:59:35.359951   62197 main.go:141] libmachine: (embed-certs-727235) Getting domain xml...
	I0421 19:59:35.360663   62197 main.go:141] libmachine: (embed-certs-727235) Creating domain...
	I0421 19:59:36.615174   62197 main.go:141] libmachine: (embed-certs-727235) Waiting to get IP...
	I0421 19:59:36.615997   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.616369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.616421   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.616351   63337 retry.go:31] will retry after 283.711872ms: waiting for machine to come up
	I0421 19:59:36.902032   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:36.902618   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:36.902655   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:36.902566   63337 retry.go:31] will retry after 336.383022ms: waiting for machine to come up
	I0421 19:59:37.240117   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.240613   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.240637   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.240565   63337 retry.go:31] will retry after 468.409378ms: waiting for machine to come up
	I0421 19:59:37.711065   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:37.711526   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:37.711556   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:37.711481   63337 retry.go:31] will retry after 457.618649ms: waiting for machine to come up
	I0421 19:59:38.170991   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.171513   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.171542   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.171450   63337 retry.go:31] will retry after 756.497464ms: waiting for machine to come up
	I0421 19:59:38.929950   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:38.930460   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:38.930495   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:38.930406   63337 retry.go:31] will retry after 667.654845ms: waiting for machine to come up
	I0421 19:59:39.599112   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:39.599566   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:39.599595   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:39.599514   63337 retry.go:31] will retry after 862.586366ms: waiting for machine to come up
	I0421 19:59:40.463709   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:40.464277   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:40.464311   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:40.464216   63337 retry.go:31] will retry after 1.446407672s: waiting for machine to come up
	I0421 19:59:41.912470   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:41.912935   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:41.912967   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:41.912893   63337 retry.go:31] will retry after 1.78143514s: waiting for machine to come up
	I0421 19:59:43.695369   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:43.695781   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:43.695818   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:43.695761   63337 retry.go:31] will retry after 1.850669352s: waiting for machine to come up
	I0421 19:59:45.547626   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:45.548119   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:45.548147   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:45.548063   63337 retry.go:31] will retry after 2.399567648s: waiting for machine to come up
	I0421 19:59:47.949884   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:47.950410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:47.950435   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:47.950371   63337 retry.go:31] will retry after 2.461886164s: waiting for machine to come up
	I0421 19:59:50.413594   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:50.414039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | unable to find current IP address of domain embed-certs-727235 in network mk-embed-certs-727235
	I0421 19:59:50.414075   62197 main.go:141] libmachine: (embed-certs-727235) DBG | I0421 19:59:50.413995   63337 retry.go:31] will retry after 3.706995804s: waiting for machine to come up
	I0421 19:59:54.123715   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124155   62197 main.go:141] libmachine: (embed-certs-727235) Found IP for machine: 192.168.72.9
	I0421 19:59:54.124185   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has current primary IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.124194   62197 main.go:141] libmachine: (embed-certs-727235) Reserving static IP address...
	I0421 19:59:54.124657   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.124687   62197 main.go:141] libmachine: (embed-certs-727235) Reserved static IP address: 192.168.72.9
	I0421 19:59:54.124708   62197 main.go:141] libmachine: (embed-certs-727235) DBG | skip adding static IP to network mk-embed-certs-727235 - found existing host DHCP lease matching {name: "embed-certs-727235", mac: "52:54:00:9c:43:7c", ip: "192.168.72.9"}
	I0421 19:59:54.124723   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Getting to WaitForSSH function...
	I0421 19:59:54.124737   62197 main.go:141] libmachine: (embed-certs-727235) Waiting for SSH to be available...
	I0421 19:59:54.126889   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127295   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.127327   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.127410   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH client type: external
	I0421 19:59:54.127437   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa (-rw-------)
	I0421 19:59:54.127483   62197 main.go:141] libmachine: (embed-certs-727235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 19:59:54.127502   62197 main.go:141] libmachine: (embed-certs-727235) DBG | About to run SSH command:
	I0421 19:59:54.127521   62197 main.go:141] libmachine: (embed-certs-727235) DBG | exit 0
	I0421 19:59:54.254733   62197 main.go:141] libmachine: (embed-certs-727235) DBG | SSH cmd err, output: <nil>: 
	I0421 19:59:54.255110   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetConfigRaw
	I0421 19:59:54.255772   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.258448   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.258834   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.258858   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.259128   62197 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/config.json ...
	I0421 19:59:54.259326   62197 machine.go:94] provisionDockerMachine start ...
	I0421 19:59:54.259348   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:54.259572   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.262235   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262666   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.262695   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.262773   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.262946   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.263307   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.263484   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.263693   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.263712   62197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:59:54.379098   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:59:54.379135   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379445   62197 buildroot.go:166] provisioning hostname "embed-certs-727235"
	I0421 19:59:54.379477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.379680   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.382614   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383064   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.383095   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.383211   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.383422   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383585   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.383748   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.383896   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.384121   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.384147   62197 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-727235 && echo "embed-certs-727235" | sudo tee /etc/hostname
	I0421 19:59:54.511915   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-727235
	
	I0421 19:59:54.511944   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.515093   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515475   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.515508   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.515663   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.515865   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516024   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.516131   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.516275   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:54.516436   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:54.516452   62197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-727235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-727235/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-727235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:59:54.638386   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
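The empty SSH output above means the /etc/hosts guard ran cleanly: the script only touches the 127.0.1.1 line when the new hostname is not already present. A quick verification sketch, assuming a shell on the guest:

    # hostname written by the previous step
    cat /etc/hostname
    hostname

    # the loopback alias the script maintains
    grep embed-certs-727235 /etc/hosts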
	I0421 19:59:54.638426   62197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 19:59:54.638450   62197 buildroot.go:174] setting up certificates
	I0421 19:59:54.638460   62197 provision.go:84] configureAuth start
	I0421 19:59:54.638468   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetMachineName
	I0421 19:59:54.638764   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:54.641718   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642039   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.642084   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.642297   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.644790   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645154   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.645182   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.645300   62197 provision.go:143] copyHostCerts
	I0421 19:59:54.645353   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 19:59:54.645363   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 19:59:54.645423   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 19:59:54.645506   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 19:59:54.645514   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 19:59:54.645535   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 19:59:54.645587   62197 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 19:59:54.645594   62197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 19:59:54.645613   62197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 19:59:54.645658   62197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-727235 san=[127.0.0.1 192.168.72.9 embed-certs-727235 localhost minikube]
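The server certificate generated here is the machine-level TLS cert that gets copied to /etc/docker/server.pem below, and its SAN list has to cover every name used to reach the VM (127.0.0.1, 192.168.72.9, embed-certs-727235, localhost, minikube, per the san=[...] above). A small sketch for inspecting it on the Jenkins host, assuming openssl is available there:

    # confirm the SANs baked into the freshly generated server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'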
	I0421 19:59:54.847892   62197 provision.go:177] copyRemoteCerts
	I0421 19:59:54.847950   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:59:54.847974   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:54.850561   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.850885   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:54.850916   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:54.851070   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:54.851261   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:54.851408   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:54.851542   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:54.939705   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 19:59:54.969564   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:59:54.996643   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:59:55.023261   62197 provision.go:87] duration metric: took 384.790427ms to configureAuth
	I0421 19:59:55.023285   62197 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:59:55.023469   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:59:55.023553   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.026429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026817   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.026851   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.026984   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.027176   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027309   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.027438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.027605   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.027807   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.027831   62197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 19:59:55.329921   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 19:59:55.329950   62197 machine.go:97] duration metric: took 1.070609599s to provisionDockerMachine
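The SSH command above writes the insecure-registry option into /etc/sysconfig/crio.minikube and restarts CRI-O; the echoed CRIO_MINIKUBE_OPTIONS line is the file's resulting content. Checking the outcome from inside the guest is a short sketch (assuming shell access on the node):

    # option file written by the provisioner, then the runtime's state after restart
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio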
	I0421 19:59:55.329967   62197 start.go:293] postStartSetup for "embed-certs-727235" (driver="kvm2")
	I0421 19:59:55.329986   62197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:59:55.330007   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.330422   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:59:55.330455   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.333062   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333429   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.333463   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.333642   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.333820   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.333973   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.334132   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.422655   62197 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:59:55.428020   62197 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:59:55.428049   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 19:59:55.428128   62197 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 19:59:55.428222   62197 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 19:59:55.428344   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:59:55.439964   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 19:59:55.469927   62197 start.go:296] duration metric: took 139.939886ms for postStartSetup
	I0421 19:59:55.469977   62197 fix.go:56] duration metric: took 20.134086048s for fixHost
	I0421 19:59:55.469997   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.472590   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.472954   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.472986   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.473194   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.473438   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473616   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.473813   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.473993   62197 main.go:141] libmachine: Using SSH client type: native
	I0421 19:59:55.474209   62197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0421 19:59:55.474220   62197 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:59:55.583326   62197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713729595.559945159
	
	I0421 19:59:55.583347   62197 fix.go:216] guest clock: 1713729595.559945159
	I0421 19:59:55.583358   62197 fix.go:229] Guest: 2024-04-21 19:59:55.559945159 +0000 UTC Remote: 2024-04-21 19:59:55.469982444 +0000 UTC m=+302.687162567 (delta=89.962715ms)
	I0421 19:59:55.583413   62197 fix.go:200] guest clock delta is within tolerance: 89.962715ms
	I0421 19:59:55.583420   62197 start.go:83] releasing machines lock for "embed-certs-727235", held for 20.24754889s
	I0421 19:59:55.583466   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.583763   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:55.586342   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586700   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.586726   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.586824   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587277   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587477   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 19:59:55.587559   62197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:59:55.587601   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.587683   62197 ssh_runner.go:195] Run: cat /version.json
	I0421 19:59:55.587721   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 19:59:55.590094   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590379   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590476   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590505   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590641   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590721   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:55.590747   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:55.590817   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.590888   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 19:59:55.590972   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591052   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 19:59:55.591128   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.591172   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 19:59:55.591276   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 19:59:55.676275   62197 ssh_runner.go:195] Run: systemctl --version
	I0421 19:59:55.700845   62197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 19:59:55.849591   62197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:59:55.856384   62197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:59:55.856444   62197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:59:55.875575   62197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:59:55.875602   62197 start.go:494] detecting cgroup driver to use...
	I0421 19:59:55.875686   62197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:59:55.892497   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:59:55.907596   62197 docker.go:217] disabling cri-docker service (if available) ...
	I0421 19:59:55.907660   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 19:59:55.922805   62197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 19:59:55.938117   62197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 19:59:56.064198   62197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 19:59:56.239132   62197 docker.go:233] disabling docker service ...
	I0421 19:59:56.239210   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 19:59:56.256188   62197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 19:59:56.271951   62197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 19:59:56.409651   62197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 19:59:56.545020   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 19:59:56.560474   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:59:56.581091   62197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 19:59:56.581170   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.591783   62197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 19:59:56.591853   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.602656   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.613491   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.624452   62197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:59:56.635277   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.646299   62197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.665973   62197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 19:59:56.677014   62197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:59:56.687289   62197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 19:59:56.687340   62197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 19:59:56.702507   62197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:59:56.723008   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:59:56.879595   62197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 19:59:57.034078   62197 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 19:59:57.034150   62197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 19:59:57.039565   62197 start.go:562] Will wait 60s for crictl version
	I0421 19:59:57.039621   62197 ssh_runner.go:195] Run: which crictl
	I0421 19:59:57.044006   62197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:59:57.089252   62197 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 19:59:57.089340   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.121283   62197 ssh_runner.go:195] Run: crio --version
	I0421 19:59:57.160334   62197 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 19:59:57.161976   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetIP
	I0421 19:59:57.164827   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165288   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 19:59:57.165321   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 19:59:57.165536   62197 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0421 19:59:57.170481   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:59:57.185488   62197 kubeadm.go:877] updating cluster {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:59:57.185682   62197 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 19:59:57.185736   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 19:59:57.237246   62197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 19:59:57.237303   62197 ssh_runner.go:195] Run: which lz4
	I0421 19:59:57.241760   62197 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 19:59:57.246777   62197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:59:57.246817   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 19:59:58.900652   62197 crio.go:462] duration metric: took 1.658935699s to copy over tarball
	I0421 19:59:58.900742   62197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:00:01.517236   62197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.616462501s)
	I0421 20:00:01.517268   62197 crio.go:469] duration metric: took 2.616589126s to extract the tarball
	I0421 20:00:01.517279   62197 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:00:01.560109   62197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:00:01.610448   62197 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:00:01.610476   62197 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:00:01.610484   62197 kubeadm.go:928] updating node { 192.168.72.9 8443 v1.30.0 crio true true} ...
	I0421 20:00:01.610605   62197 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-727235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:00:01.610711   62197 ssh_runner.go:195] Run: crio config
	I0421 20:00:01.670151   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:01.670176   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:01.670188   62197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:00:01.670210   62197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.9 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-727235 NodeName:embed-certs-727235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:00:01.670391   62197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-727235"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:00:01.670479   62197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:00:01.683795   62197 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:00:01.683876   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:00:01.696350   62197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0421 20:00:01.717795   62197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:00:01.739491   62197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0421 20:00:01.761288   62197 ssh_runner.go:195] Run: grep 192.168.72.9	control-plane.minikube.internal$ /etc/hosts
	I0421 20:00:01.766285   62197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:00:01.781727   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:00:01.913030   62197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:00:01.934347   62197 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235 for IP: 192.168.72.9
	I0421 20:00:01.934375   62197 certs.go:194] generating shared ca certs ...
	I0421 20:00:01.934395   62197 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:00:01.934541   62197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:00:01.934615   62197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:00:01.934630   62197 certs.go:256] generating profile certs ...
	I0421 20:00:01.934729   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/client.key
	I0421 20:00:01.934796   62197 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key.2840921d
	I0421 20:00:01.934854   62197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key
	I0421 20:00:01.934994   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:00:01.935032   62197 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:00:01.935045   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:00:01.935078   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:00:01.935110   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:00:01.935141   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:00:01.935197   62197 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:00:01.936087   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:00:01.967117   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:00:02.003800   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:00:02.048029   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:00:02.089245   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0421 20:00:02.125745   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:00:02.163109   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:00:02.196506   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/embed-certs-727235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:00:02.229323   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:00:02.260648   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:00:02.290829   62197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:00:02.322222   62197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:00:02.344701   62197 ssh_runner.go:195] Run: openssl version
	I0421 20:00:02.352355   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:00:02.366812   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372857   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.372947   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:00:02.380616   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:00:02.395933   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:00:02.411591   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418090   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.418172   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:00:02.425721   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:00:02.443203   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:00:02.458442   62197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464317   62197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.464386   62197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:00:02.471351   62197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:00:02.484925   62197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:00:02.491028   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 20:00:02.498970   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 20:00:02.506460   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 20:00:02.514257   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 20:00:02.521253   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 20:00:02.528828   62197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 20:00:02.537353   62197 kubeadm.go:391] StartCluster: {Name:embed-certs-727235 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-727235 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:00:02.537443   62197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:00:02.537495   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.587891   62197 cri.go:89] found id: ""
	I0421 20:00:02.587996   62197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0421 20:00:02.601571   62197 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 20:00:02.601600   62197 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 20:00:02.601606   62197 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 20:00:02.601658   62197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 20:00:02.616596   62197 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:00:02.617728   62197 kubeconfig.go:125] found "embed-certs-727235" server: "https://192.168.72.9:8443"
	I0421 20:00:02.619968   62197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:00:02.634565   62197 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.9
	I0421 20:00:02.634618   62197 kubeadm.go:1154] stopping kube-system containers ...
	I0421 20:00:02.634633   62197 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0421 20:00:02.634699   62197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:00:02.685251   62197 cri.go:89] found id: ""
	I0421 20:00:02.685329   62197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 20:00:02.707720   62197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:00:02.722037   62197 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:00:02.722082   62197 kubeadm.go:156] found existing configuration files:
	
	I0421 20:00:02.722140   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:00:02.735544   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:00:02.735610   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:00:02.748027   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:00:02.759766   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:00:02.759841   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:00:02.773350   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.787463   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:00:02.787519   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:00:02.802575   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:00:02.816988   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:00:02.817045   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:00:02.830215   62197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:00:02.843407   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:03.501684   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.207411   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.448982   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.525835   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:04.656875   62197 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:00:04.656964   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.157388   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.657897   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:00:05.717895   62197 api_server.go:72] duration metric: took 1.061019387s to wait for apiserver process to appear ...
	I0421 20:00:05.717929   62197 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:00:05.717953   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:05.718558   62197 api_server.go:269] stopped: https://192.168.72.9:8443/healthz: Get "https://192.168.72.9:8443/healthz": dial tcp 192.168.72.9:8443: connect: connection refused
	I0421 20:00:06.218281   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.703744   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.703789   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.703806   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.722219   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.722249   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:08.722265   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:08.733030   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:00:08.733061   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:00:09.218765   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.224083   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.224115   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:09.718435   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:09.726603   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:00:09.726629   62197 api_server.go:103] status: https://192.168.72.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:00:10.218162   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:00:10.224240   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 200:
	ok
	I0421 20:00:10.235750   62197 api_server.go:141] control plane version: v1.30.0
	I0421 20:00:10.235778   62197 api_server.go:131] duration metric: took 4.517842889s to wait for apiserver health ...
	I0421 20:00:10.235787   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:00:10.235793   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:00:10.237625   62197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:00:10.239279   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:00:10.262918   62197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:00:10.297402   62197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:00:10.310749   62197 system_pods.go:59] 8 kube-system pods found
	I0421 20:00:10.310805   62197 system_pods.go:61] "coredns-7db6d8ff4d-52bft" [85facf66-ffda-447c-8a04-ac95ac842470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0421 20:00:10.310818   62197 system_pods.go:61] "etcd-embed-certs-727235" [e7031073-0e50-431e-ab67-eda1fa4b9f18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 20:00:10.310833   62197 system_pods.go:61] "kube-apiserver-embed-certs-727235" [28be3882-5790-4754-9ef6-ec8f71367757] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0421 20:00:10.310847   62197 system_pods.go:61] "kube-controller-manager-embed-certs-727235" [83da56c1-3479-47f0-936f-ef9d0e4f455d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0421 20:00:10.310854   62197 system_pods.go:61] "kube-proxy-djqh8" [307fa1e9-345f-49b9-85e5-7b20b3275b45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0421 20:00:10.310865   62197 system_pods.go:61] "kube-scheduler-embed-certs-727235" [096371b2-a9b9-4867-a7da-b540432a973b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 20:00:10.310884   62197 system_pods.go:61] "metrics-server-569cc877fc-959cd" [146c80ec-6ae0-4ba3-b4be-df99fbf010a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:00:10.310901   62197 system_pods.go:61] "storage-provisioner" [054513d7-51f3-40eb-b875-b73d16c7405b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0421 20:00:10.310913   62197 system_pods.go:74] duration metric: took 13.478482ms to wait for pod list to return data ...
	I0421 20:00:10.310928   62197 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:00:10.315131   62197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:00:10.315170   62197 node_conditions.go:123] node cpu capacity is 2
	I0421 20:00:10.315187   62197 node_conditions.go:105] duration metric: took 4.252168ms to run NodePressure ...
	I0421 20:00:10.315210   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:00:10.620925   62197 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628865   62197 kubeadm.go:733] kubelet initialised
	I0421 20:00:10.628891   62197 kubeadm.go:734] duration metric: took 7.942591ms waiting for restarted kubelet to initialise ...
	I0421 20:00:10.628899   62197 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:00:10.635290   62197 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:12.642618   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:14.648309   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:16.143559   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:16.143590   62197 pod_ready.go:81] duration metric: took 5.508275049s for pod "coredns-7db6d8ff4d-52bft" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:16.143602   62197 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:18.151189   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:20.152541   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.153814   62197 pod_ready.go:102] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:22.649883   62197 pod_ready.go:92] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.649903   62197 pod_ready.go:81] duration metric: took 6.506293522s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.649912   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655444   62197 pod_ready.go:92] pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.655460   62197 pod_ready.go:81] duration metric: took 5.541421ms for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.655468   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660078   62197 pod_ready.go:92] pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.660094   62197 pod_ready.go:81] duration metric: took 4.62017ms for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.660102   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664789   62197 pod_ready.go:92] pod "kube-proxy-djqh8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.664808   62197 pod_ready.go:81] duration metric: took 4.700876ms for pod "kube-proxy-djqh8" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.664816   62197 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668836   62197 pod_ready.go:92] pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:00:22.668851   62197 pod_ready.go:81] duration metric: took 4.029823ms for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:22.668858   62197 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	I0421 20:00:24.676797   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:26.678669   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:29.175261   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:31.176580   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:33.677232   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:36.176401   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:38.678477   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:40.679096   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:43.178439   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:45.675906   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:47.676304   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:49.678715   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:52.176666   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:54.177353   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:56.677078   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:00:58.680937   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:01.175866   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:03.177322   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:05.676551   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:08.176504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:10.675324   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:12.679609   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:15.177636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:17.177938   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:19.676849   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:21.677530   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:23.679352   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:26.176177   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:28.676123   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:30.677770   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:33.176672   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:35.675473   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:37.676094   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:40.177351   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:42.675765   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:44.677504   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:47.178728   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:49.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:51.676977   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:53.677967   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:56.177161   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:01:58.675893   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:00.676490   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:03.175994   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:05.676919   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:08.176147   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:10.676394   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:13.176425   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:15.178380   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:17.677109   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:20.174895   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:22.176464   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:24.177654   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:26.675586   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:28.676639   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:31.176664   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:33.677030   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:36.176792   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:38.176920   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:40.180665   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:42.678395   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:45.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:47.675740   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:49.676127   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:52.179886   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:54.675602   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:56.677577   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:02:58.681540   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:01.179494   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:03.676002   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:06.178560   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:08.676363   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:11.176044   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:13.176852   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:15.676011   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:17.678133   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:20.177064   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:22.676179   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:25.176206   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:27.176706   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:29.177019   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:31.677239   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:33.679396   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:36.176193   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:38.176619   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:40.676129   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:42.677052   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:44.679521   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:47.175636   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:49.176114   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:51.676482   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:54.176228   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:56.675340   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:03:58.676581   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:01.175469   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:03.675918   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:05.677443   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:08.175700   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:10.175971   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:12.176364   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:14.675544   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:16.677069   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:19.178329   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:21.677217   62197 pod_ready.go:102] pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace has status "Ready":"False"
	I0421 20:04:22.669233   62197 pod_ready.go:81] duration metric: took 4m0.000357215s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" ...
	E0421 20:04:22.669279   62197 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-959cd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0421 20:04:22.669298   62197 pod_ready.go:38] duration metric: took 4m12.040390946s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
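
The long run of pod_ready.go:102 lines above is a simple poll of each pod's PodReady condition; metrics-server-569cc877fc-959cd never reports Ready within the 4m0s budget, so the extra wait times out and the restart path gives up below. A hedged sketch of such a readiness poll with client-go's wait helper (names, intervals, and the kubeconfig path are illustrative, not minikube's implementation):

// Sketch of the kind of readiness poll pod_ready.go performs: fetch the pod
// and check its PodReady condition until it is True or the timeout elapses.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // pod may not exist yet; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Pod name mirrors the one the log waits on; the 4-minute budget matches the log.
	if err := waitPodReady(cs, "kube-system", "metrics-server-569cc877fc-959cd", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
}
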
	I0421 20:04:22.669328   62197 kubeadm.go:591] duration metric: took 4m20.067715018s to restartPrimaryControlPlane
	W0421 20:04:22.669388   62197 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0421 20:04:22.669420   62197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0421 20:04:55.622547   62197 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.953103457s)
	I0421 20:04:55.622619   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:04:55.642562   62197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:04:55.656647   62197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:04:55.669601   62197 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:04:55.669634   62197 kubeadm.go:156] found existing configuration files:
	
	I0421 20:04:55.669698   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:04:55.681786   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:04:55.681877   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:04:55.693186   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:04:55.704426   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:04:55.704498   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:04:55.715698   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:04:55.726902   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:04:55.726963   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:04:55.737702   62197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:04:55.747525   62197 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:04:55.747578   62197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
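
The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443. After the kubeadm reset the files are already gone, so every grep exits with status 2 and the rm calls are no-ops. Expressed as a standalone Go sketch (illustrative; minikube actually runs these checks over SSH inside the guest):

// Sketch of the stale-config cleanup performed via "grep ... || rm -f":
// remove any kubeconfig that does not point at the expected control-plane endpoint.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", f, rmErr)
			}
		}
	}
}
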
	I0421 20:04:55.758189   62197 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:04:55.822641   62197 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:04:55.822744   62197 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:04:55.980743   62197 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:04:55.980861   62197 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:04:55.980970   62197 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0421 20:04:56.253377   62197 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:04:56.255499   62197 out.go:204]   - Generating certificates and keys ...
	I0421 20:04:56.255617   62197 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:04:56.255700   62197 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:04:56.255804   62197 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 20:04:56.255884   62197 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0421 20:04:56.256006   62197 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0421 20:04:56.256106   62197 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0421 20:04:56.256207   62197 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0421 20:04:56.256308   62197 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0421 20:04:56.256402   62197 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 20:04:56.256509   62197 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 20:04:56.256566   62197 kubeadm.go:309] [certs] Using the existing "sa" key
	I0421 20:04:56.256644   62197 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:04:56.437649   62197 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:04:56.650553   62197 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:04:57.060706   62197 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:04:57.174098   62197 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:04:57.367997   62197 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:04:57.368680   62197 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:04:57.371654   62197 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:04:57.373516   62197 out.go:204]   - Booting up control plane ...
	I0421 20:04:57.373653   62197 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:04:57.373917   62197 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:04:57.375239   62197 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:04:57.398413   62197 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:04:57.399558   62197 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:04:57.399617   62197 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:04:57.553539   62197 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:04:57.553623   62197 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:04:58.054844   62197 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.816521ms
	I0421 20:04:58.054972   62197 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:05:03.560432   62197 kubeadm.go:309] [api-check] The API server is healthy after 5.502858901s
	I0421 20:05:03.586877   62197 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:05:03.612249   62197 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:05:03.657011   62197 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:05:03.657292   62197 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-727235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:05:03.681951   62197 kubeadm.go:309] [bootstrap-token] Using token: qlvjzn.lyyunat9omiyo08d
	I0421 20:05:03.683979   62197 out.go:204]   - Configuring RBAC rules ...
	I0421 20:05:03.684163   62197 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:05:03.692087   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:05:03.708154   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:05:03.719186   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:05:03.725682   62197 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:05:03.743859   62197 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:05:03.966200   62197 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:05:04.418727   62197 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:05:04.965852   62197 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:05:04.967125   62197 kubeadm.go:309] 
	I0421 20:05:04.967218   62197 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:05:04.967234   62197 kubeadm.go:309] 
	I0421 20:05:04.967347   62197 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:05:04.967364   62197 kubeadm.go:309] 
	I0421 20:05:04.967386   62197 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:05:04.967457   62197 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:05:04.967526   62197 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:05:04.967536   62197 kubeadm.go:309] 
	I0421 20:05:04.967627   62197 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:05:04.967645   62197 kubeadm.go:309] 
	I0421 20:05:04.967719   62197 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:05:04.967737   62197 kubeadm.go:309] 
	I0421 20:05:04.967795   62197 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:05:04.967943   62197 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:05:04.968057   62197 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:05:04.968065   62197 kubeadm.go:309] 
	I0421 20:05:04.968137   62197 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:05:04.968219   62197 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:05:04.968226   62197 kubeadm.go:309] 
	I0421 20:05:04.968342   62197 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qlvjzn.lyyunat9omiyo08d \
	I0421 20:05:04.968485   62197 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:05:04.968517   62197 kubeadm.go:309] 	--control-plane 
	I0421 20:05:04.968526   62197 kubeadm.go:309] 
	I0421 20:05:04.968613   62197 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:05:04.968626   62197 kubeadm.go:309] 
	I0421 20:05:04.968729   62197 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qlvjzn.lyyunat9omiyo08d \
	I0421 20:05:04.968880   62197 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:05:04.969331   62197 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
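
The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm derives as the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from the CA in the certs directory shown earlier (/var/lib/minikube/certs; the ca.crt filename is an assumption):

// Sketch: recompute the --discovery-token-ca-cert-hash shown in the join command.
// The hash is "sha256:" + SHA-256 of the CA certificate's DER-encoded
// Subject Public Key Info.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path assumed from the certs dir in the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey) // DER-encoded SubjectPublicKeyInfo
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
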
	I0421 20:05:04.969624   62197 cni.go:84] Creating CNI manager for ""
	I0421 20:05:04.969641   62197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 20:05:04.971771   62197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:05:04.973341   62197 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:05:04.987129   62197 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:05:05.011637   62197 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:05:05.011711   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:05.011764   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-727235 minikube.k8s.io/updated_at=2024_04_21T20_05_05_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=embed-certs-727235 minikube.k8s.io/primary=true
	I0421 20:05:05.067233   62197 ops.go:34] apiserver oom_adj: -16
	I0421 20:05:05.238528   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:05.739469   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:06.238758   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:06.738799   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:07.239324   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:07.738768   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:08.239309   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:08.738788   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:09.239302   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:09.739436   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:10.239021   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:10.738776   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:11.239306   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:11.738999   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:12.238807   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:12.739328   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:13.239138   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:13.739202   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:14.238984   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:14.739315   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:15.239116   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:15.739002   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:16.239284   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:16.738885   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:17.238968   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:17.739159   62197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:17.887030   62197 kubeadm.go:1107] duration metric: took 12.875377625s to wait for elevateKubeSystemPrivileges
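
The burst of "kubectl get sa default" calls above is minikube waiting, at roughly 500 ms intervals, for the default ServiceAccount to exist after binding cluster-admin to kube-system:default through the minikube-rbac ClusterRoleBinding created at 20:05:05.011711. A rough client-go rendering of that step, written as a helper that takes an already-constructed clientset (illustrative; minikube shells out to kubectl over SSH instead):

// Sketch of the privilege-elevation step: create the minikube-rbac
// ClusterRoleBinding, then poll until the "default" ServiceAccount exists.
package elevate

import (
	"context"
	"time"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func elevateKubeSystemPrivileges(ctx context.Context, cs kubernetes.Interface) error {
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		Subjects: []rbacv1.Subject{
			{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"},
		},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Poll (~500 ms, as in the log) until the controller manager has created
	// the "default" ServiceAccount in the "default" namespace.
	for {
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
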
	W0421 20:05:17.887075   62197 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:05:17.887084   62197 kubeadm.go:393] duration metric: took 5m15.349737892s to StartCluster
	I0421 20:05:17.887105   62197 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:17.887211   62197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:05:17.889418   62197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:17.889699   62197 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:05:17.890940   62197 out.go:177] * Verifying Kubernetes components...
	I0421 20:05:17.889812   62197 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:05:17.889876   62197 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:05:17.892135   62197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:17.892135   62197 addons.go:69] Setting default-storageclass=true in profile "embed-certs-727235"
	I0421 20:05:17.892262   62197 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-727235"
	I0421 20:05:17.892135   62197 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-727235"
	I0421 20:05:17.892349   62197 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-727235"
	W0421 20:05:17.892368   62197 addons.go:243] addon storage-provisioner should already be in state true
	I0421 20:05:17.892148   62197 addons.go:69] Setting metrics-server=true in profile "embed-certs-727235"
	I0421 20:05:17.892415   62197 addons.go:234] Setting addon metrics-server=true in "embed-certs-727235"
	W0421 20:05:17.892427   62197 addons.go:243] addon metrics-server should already be in state true
	I0421 20:05:17.892448   62197 host.go:66] Checking if "embed-certs-727235" exists ...
	I0421 20:05:17.892454   62197 host.go:66] Checking if "embed-certs-727235" exists ...
	I0421 20:05:17.892696   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.892732   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.892872   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.892894   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.892874   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.893004   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.912112   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0421 20:05:17.912149   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0421 20:05:17.912154   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I0421 20:05:17.912728   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.912823   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.912836   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.913268   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.913288   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.913395   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.913416   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.913576   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.913597   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.913859   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.913868   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.913926   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.914044   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.914443   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.914455   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.914494   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.914554   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.918634   62197 addons.go:234] Setting addon default-storageclass=true in "embed-certs-727235"
	W0421 20:05:17.918658   62197 addons.go:243] addon default-storageclass should already be in state true
	I0421 20:05:17.918690   62197 host.go:66] Checking if "embed-certs-727235" exists ...
	I0421 20:05:17.919046   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.919091   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.934397   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0421 20:05:17.934457   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0421 20:05:17.934844   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.935364   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.935384   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.935717   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.935902   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.936450   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I0421 20:05:17.937200   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.937722   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.937740   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.937806   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.938193   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 20:05:17.938262   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.940253   62197 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:05:17.938565   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.938904   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.941894   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.942116   62197 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:05:17.942127   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:05:17.942140   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 20:05:17.943273   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.943971   62197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:05:17.943997   62197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:05:17.945417   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.945825   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 20:05:17.945844   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.946146   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 20:05:17.946324   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 20:05:17.946545   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 20:05:17.946721   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 20:05:17.947089   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 20:05:17.949422   62197 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0421 20:05:17.950901   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 20:05:17.950918   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 20:05:17.950936   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 20:05:17.954912   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.955319   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 20:05:17.955339   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.955524   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 20:05:17.955671   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 20:05:17.955778   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 20:05:17.955891   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
	I0421 20:05:17.964056   62197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0421 20:05:17.964584   62197 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:05:17.965120   62197 main.go:141] libmachine: Using API Version  1
	I0421 20:05:17.965154   62197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:05:17.965532   62197 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:05:17.965763   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetState
	I0421 20:05:17.967498   62197 main.go:141] libmachine: (embed-certs-727235) Calling .DriverName
	I0421 20:05:17.967755   62197 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:05:17.967774   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:05:17.967796   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHHostname
	I0421 20:05:17.970713   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.971145   62197 main.go:141] libmachine: (embed-certs-727235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:43:7c", ip: ""} in network mk-embed-certs-727235: {Iface:virbr4 ExpiryTime:2024-04-21 20:59:48 +0000 UTC Type:0 Mac:52:54:00:9c:43:7c Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:embed-certs-727235 Clientid:01:52:54:00:9c:43:7c}
	I0421 20:05:17.971197   62197 main.go:141] libmachine: (embed-certs-727235) DBG | domain embed-certs-727235 has defined IP address 192.168.72.9 and MAC address 52:54:00:9c:43:7c in network mk-embed-certs-727235
	I0421 20:05:17.971310   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHPort
	I0421 20:05:17.971561   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHKeyPath
	I0421 20:05:17.971902   62197 main.go:141] libmachine: (embed-certs-727235) Calling .GetSSHUsername
	I0421 20:05:17.972048   62197 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa Username:docker}
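
The sshutil lines above show minikube opening SSH sessions to the guest (user docker, 192.168.72.9:22, key from the machines directory) so it can scp the addon manifests and run kubectl inside the VM. A minimal sketch of establishing such a session with golang.org/x/crypto/ssh; host, port, and key path are taken from the log, while the host-key policy and the command are illustrative:

// Sketch: open an SSH session to the minikube guest and run one command.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18702-3854/.minikube/machines/embed-certs-727235/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.9:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo ls /etc/kubernetes/addons")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
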
	I0421 20:05:18.138650   62197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:05:18.183377   62197 node_ready.go:35] waiting up to 6m0s for node "embed-certs-727235" to be "Ready" ...
	I0421 20:05:18.193012   62197 node_ready.go:49] node "embed-certs-727235" has status "Ready":"True"
	I0421 20:05:18.193041   62197 node_ready.go:38] duration metric: took 9.629767ms for node "embed-certs-727235" to be "Ready" ...
	I0421 20:05:18.193054   62197 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:05:18.204041   62197 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:18.419415   62197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:05:18.447355   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 20:05:18.447380   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0421 20:05:18.453179   62197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:05:18.567668   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 20:05:18.567702   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 20:05:18.626134   62197 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 20:05:18.626159   62197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 20:05:18.735391   62197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 20:05:19.815807   62197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.362600114s)
	I0421 20:05:19.815863   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.815874   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816010   62197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.396559617s)
	I0421 20:05:19.816059   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.816075   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816198   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.816229   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.816246   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.816255   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.816263   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816336   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.816390   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.816411   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.816425   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.816436   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.816578   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.816487   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.816865   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.818141   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:19.818156   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.818178   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:19.862592   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:19.862620   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:19.862896   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:19.862911   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:20.057104   62197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321660879s)
	I0421 20:05:20.057167   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:20.057184   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:20.057475   62197 main.go:141] libmachine: (embed-certs-727235) DBG | Closing plugin on server side
	I0421 20:05:20.057513   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:20.057530   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:20.057543   62197 main.go:141] libmachine: Making call to close driver server
	I0421 20:05:20.057554   62197 main.go:141] libmachine: (embed-certs-727235) Calling .Close
	I0421 20:05:20.057789   62197 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:05:20.057834   62197 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:05:20.057850   62197 addons.go:470] Verifying addon metrics-server=true in "embed-certs-727235"
	I0421 20:05:20.059852   62197 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0421 20:05:20.061799   62197 addons.go:505] duration metric: took 2.171989077s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0421 20:05:20.211929   62197 pod_ready.go:102] pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace has status "Ready":"False"
	I0421 20:05:20.716853   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.716883   62197 pod_ready.go:81] duration metric: took 2.512810672s for pod "coredns-7db6d8ff4d-b7p8r" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.716897   62197 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mjgjp" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.729538   62197 pod_ready.go:92] pod "coredns-7db6d8ff4d-mjgjp" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.729562   62197 pod_ready.go:81] duration metric: took 12.656265ms for pod "coredns-7db6d8ff4d-mjgjp" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.729574   62197 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.734922   62197 pod_ready.go:92] pod "etcd-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.734945   62197 pod_ready.go:81] duration metric: took 5.363976ms for pod "etcd-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.734957   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.744017   62197 pod_ready.go:92] pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.744042   62197 pod_ready.go:81] duration metric: took 9.077653ms for pod "kube-apiserver-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.744052   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.756573   62197 pod_ready.go:92] pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:20.756596   62197 pod_ready.go:81] duration metric: took 12.536659ms for pod "kube-controller-manager-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:20.756609   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zh4fs" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.109950   62197 pod_ready.go:92] pod "kube-proxy-zh4fs" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:21.109979   62197 pod_ready.go:81] duration metric: took 353.361994ms for pod "kube-proxy-zh4fs" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.109994   62197 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.511561   62197 pod_ready.go:92] pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace has status "Ready":"True"
	I0421 20:05:21.511585   62197 pod_ready.go:81] duration metric: took 401.583353ms for pod "kube-scheduler-embed-certs-727235" in "kube-system" namespace to be "Ready" ...
	I0421 20:05:21.511593   62197 pod_ready.go:38] duration metric: took 3.3185271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:05:21.511607   62197 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:05:21.511654   62197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:05:21.529942   62197 api_server.go:72] duration metric: took 3.640186145s to wait for apiserver process to appear ...
	I0421 20:05:21.529968   62197 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:05:21.529989   62197 api_server.go:253] Checking apiserver healthz at https://192.168.72.9:8443/healthz ...
	I0421 20:05:21.534887   62197 api_server.go:279] https://192.168.72.9:8443/healthz returned 200:
	ok
	I0421 20:05:21.535839   62197 api_server.go:141] control plane version: v1.30.0
	I0421 20:05:21.535863   62197 api_server.go:131] duration metric: took 5.887688ms to wait for apiserver health ...
	I0421 20:05:21.535873   62197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:05:21.713348   62197 system_pods.go:59] 9 kube-system pods found
	I0421 20:05:21.713377   62197 system_pods.go:61] "coredns-7db6d8ff4d-b7p8r" [46baeec2-c553-460c-b19a-62c20d04eb00] Running
	I0421 20:05:21.713382   62197 system_pods.go:61] "coredns-7db6d8ff4d-mjgjp" [3d879b9e-8ab5-4ae6-9677-024c7172f9aa] Running
	I0421 20:05:21.713386   62197 system_pods.go:61] "etcd-embed-certs-727235" [105543da-d105-416a-aa27-09cfbd574d1c] Running
	I0421 20:05:21.713389   62197 system_pods.go:61] "kube-apiserver-embed-certs-727235" [bd07efe0-d573-483a-8ea8-7faa6277d53b] Running
	I0421 20:05:21.713393   62197 system_pods.go:61] "kube-controller-manager-embed-certs-727235" [aec17b3e-990e-4ca0-b6bd-1693eba6cb53] Running
	I0421 20:05:21.713396   62197 system_pods.go:61] "kube-proxy-zh4fs" [0b4342b3-19be-43ce-9a60-27dfab04af45] Running
	I0421 20:05:21.713398   62197 system_pods.go:61] "kube-scheduler-embed-certs-727235" [af8aff7d-caf3-46bd-9a73-08c37baeb355] Running
	I0421 20:05:21.713404   62197 system_pods.go:61] "metrics-server-569cc877fc-2vwhn" [4cb94623-a7b9-41e3-a6bc-fcc8b2856365] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:05:21.713408   62197 system_pods.go:61] "storage-provisioner" [63784fb4-2205-4b24-94c8-b11015c21ed6] Running
	I0421 20:05:21.713415   62197 system_pods.go:74] duration metric: took 177.536941ms to wait for pod list to return data ...
	I0421 20:05:21.713422   62197 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:05:21.917809   62197 default_sa.go:45] found service account: "default"
	I0421 20:05:21.917837   62197 default_sa.go:55] duration metric: took 204.409737ms for default service account to be created ...
	I0421 20:05:21.917847   62197 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:05:22.119019   62197 system_pods.go:86] 9 kube-system pods found
	I0421 20:05:22.119051   62197 system_pods.go:89] "coredns-7db6d8ff4d-b7p8r" [46baeec2-c553-460c-b19a-62c20d04eb00] Running
	I0421 20:05:22.119061   62197 system_pods.go:89] "coredns-7db6d8ff4d-mjgjp" [3d879b9e-8ab5-4ae6-9677-024c7172f9aa] Running
	I0421 20:05:22.119066   62197 system_pods.go:89] "etcd-embed-certs-727235" [105543da-d105-416a-aa27-09cfbd574d1c] Running
	I0421 20:05:22.119073   62197 system_pods.go:89] "kube-apiserver-embed-certs-727235" [bd07efe0-d573-483a-8ea8-7faa6277d53b] Running
	I0421 20:05:22.119079   62197 system_pods.go:89] "kube-controller-manager-embed-certs-727235" [aec17b3e-990e-4ca0-b6bd-1693eba6cb53] Running
	I0421 20:05:22.119084   62197 system_pods.go:89] "kube-proxy-zh4fs" [0b4342b3-19be-43ce-9a60-27dfab04af45] Running
	I0421 20:05:22.119090   62197 system_pods.go:89] "kube-scheduler-embed-certs-727235" [af8aff7d-caf3-46bd-9a73-08c37baeb355] Running
	I0421 20:05:22.119101   62197 system_pods.go:89] "metrics-server-569cc877fc-2vwhn" [4cb94623-a7b9-41e3-a6bc-fcc8b2856365] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0421 20:05:22.119108   62197 system_pods.go:89] "storage-provisioner" [63784fb4-2205-4b24-94c8-b11015c21ed6] Running
	I0421 20:05:22.119121   62197 system_pods.go:126] duration metric: took 201.26806ms to wait for k8s-apps to be running ...
	I0421 20:05:22.119130   62197 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:05:22.119178   62197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:05:22.136535   62197 system_svc.go:56] duration metric: took 17.395833ms WaitForService to wait for kubelet
	I0421 20:05:22.136569   62197 kubeadm.go:576] duration metric: took 4.246830881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:05:22.136600   62197 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:05:22.311566   62197 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:05:22.311592   62197 node_conditions.go:123] node cpu capacity is 2
	I0421 20:05:22.311603   62197 node_conditions.go:105] duration metric: took 174.998456ms to run NodePressure ...
	I0421 20:05:22.311612   62197 start.go:240] waiting for startup goroutines ...
	I0421 20:05:22.311618   62197 start.go:245] waiting for cluster config update ...
	I0421 20:05:22.311628   62197 start.go:254] writing updated cluster config ...
	I0421 20:05:22.311880   62197 ssh_runner.go:195] Run: rm -f paused
	I0421 20:05:22.360230   62197 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:05:22.362475   62197 out.go:177] * Done! kubectl is now configured to use "embed-certs-727235" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.452536065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730127452499750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f01c73a3-7170-4ecf-ba04-50421adf25b4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.453066577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9680bdb4-b844-4949-9168-d5589c37d001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.453210457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9680bdb4-b844-4949-9168-d5589c37d001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.453282609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9680bdb4-b844-4949-9168-d5589c37d001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.491579376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62218b15-2028-41e8-b5c2-f4e0460a2ae3 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.491681434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62218b15-2028-41e8-b5c2-f4e0460a2ae3 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.493316623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceffec37-d7ab-4d33-9d4a-e985a2d26a7b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.493719169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730127493691388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceffec37-d7ab-4d33-9d4a-e985a2d26a7b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.494516356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02a77e98-4643-4ff5-845d-d4d29254d55b name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.494592747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02a77e98-4643-4ff5-845d-d4d29254d55b name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.494643101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=02a77e98-4643-4ff5-845d-d4d29254d55b name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.533469139Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e026938e-4e75-4ac9-83dc-fc2a99a7e6d6 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.533537821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e026938e-4e75-4ac9-83dc-fc2a99a7e6d6 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.534875582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e88d3cd-9803-49bc-adc7-4e366e7014f8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.535613327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730127535576727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e88d3cd-9803-49bc-adc7-4e366e7014f8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.536260588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2fe38aa-105e-4478-8f09-1c52ddc2fb3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.536308972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2fe38aa-105e-4478-8f09-1c52ddc2fb3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.536348510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f2fe38aa-105e-4478-8f09-1c52ddc2fb3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.569669221Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=516b2b57-4651-4b0b-b48d-bb3fd89ad7eb name=/runtime.v1.RuntimeService/Version
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.569849229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=516b2b57-4651-4b0b-b48d-bb3fd89ad7eb name=/runtime.v1.RuntimeService/Version
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.578102791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0698419-d5f2-4dd9-ae0a-35c67c331c9c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.578634335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730127578597962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0698419-d5f2-4dd9-ae0a-35c67c331c9c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.579694675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d700b19-e87d-4926-8f42-7c6587dab963 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.579741595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d700b19-e87d-4926-8f42-7c6587dab963 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:08:47 old-k8s-version-867585 crio[653]: time="2024-04-21 20:08:47.579789200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4d700b19-e87d-4926-8f42-7c6587dab963 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr21 19:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052533] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043842] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr21 19:49] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.559572] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.706661] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653397] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.066823] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075953] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.180284] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.150867] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.317680] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +7.956391] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.073092] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.574533] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[ +11.346099] kauditd_printk_skb: 46 callbacks suppressed
	[Apr21 19:53] systemd-fstab-generator[4927]: Ignoring "noauto" option for root device
	[Apr21 19:55] systemd-fstab-generator[5208]: Ignoring "noauto" option for root device
	[  +0.069004] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:08:47 up 19 min,  0 users,  load average: 0.09, 0.04, 0.04
	Linux old-k8s-version-867585 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000206de0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000c80360, 0x24, 0x0, ...)
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: net.(*Dialer).DialContext(0xc0005eb8c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c80360, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00044b760, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c80360, 0x24, 0x1000000000060, 0x7fa8440621c8, 0x118, ...)
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: net/http.(*Transport).dial(0xc0006ee280, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c80360, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: net/http.(*Transport).dialConn(0xc0006ee280, 0x4f7fe00, 0xc000120018, 0x0, 0xc00064f500, 0x5, 0xc000c80360, 0x24, 0x0, 0xc000ca8000, ...)
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: net/http.(*Transport).dialConnFor(0xc0006ee280, 0xc0006b44d0)
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]: created by net/http.(*Transport).queueForDial
	Apr 21 20:08:42 old-k8s-version-867585 kubelet[6679]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 21 20:08:42 old-k8s-version-867585 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 21 20:08:42 old-k8s-version-867585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 21 20:08:43 old-k8s-version-867585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 136.
	Apr 21 20:08:43 old-k8s-version-867585 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 21 20:08:43 old-k8s-version-867585 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 21 20:08:43 old-k8s-version-867585 kubelet[6689]: I0421 20:08:43.749383    6689 server.go:416] Version: v1.20.0
	Apr 21 20:08:43 old-k8s-version-867585 kubelet[6689]: I0421 20:08:43.749714    6689 server.go:837] Client rotation is on, will bootstrap in background
	Apr 21 20:08:43 old-k8s-version-867585 kubelet[6689]: I0421 20:08:43.751878    6689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 21 20:08:43 old-k8s-version-867585 kubelet[6689]: W0421 20:08:43.752832    6689 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 21 20:08:43 old-k8s-version-867585 kubelet[6689]: I0421 20:08:43.753060    6689 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 2 (253.436732ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-867585" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (137.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (405.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727235 -n embed-certs-727235
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-21 20:21:08.004566266 +0000 UTC m=+7176.528701725
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-727235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-727235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.505µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-727235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-727235 logs -n 25
E0421 20:21:09.207666   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-727235 logs -n 25: (1.35870421s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-474762 sudo iptables                       | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo docker                         | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo cat                            | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo                                | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo find                           | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-474762 sudo crio                           | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-474762                                     | bridge-474762 | jenkins | v1.33.0 | 21 Apr 24 20:14 UTC | 21 Apr 24 20:14 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 20:12:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 20:12:21.326743   73732 out.go:291] Setting OutFile to fd 1 ...
	I0421 20:12:21.326859   73732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:12:21.326870   73732 out.go:304] Setting ErrFile to fd 2...
	I0421 20:12:21.326877   73732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:12:21.327116   73732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 20:12:21.327755   73732 out.go:298] Setting JSON to false
	I0421 20:12:21.328878   73732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6839,"bootTime":1713723502,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 20:12:21.328942   73732 start.go:139] virtualization: kvm guest
	I0421 20:12:21.331330   73732 out.go:177] * [bridge-474762] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 20:12:21.332945   73732 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:12:21.334414   73732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:12:21.332963   73732 notify.go:220] Checking for updates...
	I0421 20:12:21.335865   73732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:12:21.337290   73732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:12:21.338693   73732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 20:12:21.340049   73732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:12:21.341849   73732 config.go:182] Loaded profile config "embed-certs-727235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:21.341955   73732 config.go:182] Loaded profile config "enable-default-cni-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:21.342044   73732 config.go:182] Loaded profile config "flannel-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:21.342163   73732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:12:21.379252   73732 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 20:12:21.380597   73732 start.go:297] selected driver: kvm2
	I0421 20:12:21.380609   73732 start.go:901] validating driver "kvm2" against <nil>
	I0421 20:12:21.380620   73732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:12:21.381311   73732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:12:21.381386   73732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 20:12:21.397623   73732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 20:12:21.397665   73732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 20:12:21.397859   73732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:12:21.397917   73732 cni.go:84] Creating CNI manager for "bridge"
	I0421 20:12:21.397926   73732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 20:12:21.397972   73732 start.go:340] cluster config:
	{Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:12:21.398084   73732 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:12:21.399858   73732 out.go:177] * Starting "bridge-474762" primary control-plane node in "bridge-474762" cluster
	I0421 20:12:18.798121   70482 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:12:18.815066   70482 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:12:18.838098   70482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:12:18.838185   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:18.838197   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-474762 minikube.k8s.io/updated_at=2024_04_21T20_12_18_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=enable-default-cni-474762 minikube.k8s.io/primary=true
	I0421 20:12:19.035190   70482 ops.go:34] apiserver oom_adj: -16
	I0421 20:12:19.035322   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:19.535436   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:20.035658   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:20.535758   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:21.035557   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:21.535511   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:22.036413   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:18.379337   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:18.379897   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find current IP address of domain flannel-474762 in network mk-flannel-474762
	I0421 20:12:18.379924   72192 main.go:141] libmachine: (flannel-474762) DBG | I0421 20:12:18.379852   72233 retry.go:31] will retry after 3.592579622s: waiting for machine to come up
	I0421 20:12:21.975794   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:21.976255   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find current IP address of domain flannel-474762 in network mk-flannel-474762
	I0421 20:12:21.976292   72192 main.go:141] libmachine: (flannel-474762) DBG | I0421 20:12:21.976221   72233 retry.go:31] will retry after 3.496699336s: waiting for machine to come up
	I0421 20:12:21.401243   73732 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:12:21.401273   73732 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 20:12:21.401280   73732 cache.go:56] Caching tarball of preloaded images
	I0421 20:12:21.401345   73732 preload.go:173] Found /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0421 20:12:21.401355   73732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0421 20:12:21.401431   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/config.json ...
	I0421 20:12:21.401446   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/config.json: {Name:mk0694007987d491726509cb12151f8bc7d2b0cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:21.401552   73732 start.go:360] acquireMachinesLock for bridge-474762: {Name:mk500410bccf0ae2077d12a7d9160aa55630c445 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:12:22.536125   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:23.036360   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:23.535934   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:24.035889   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:24.536064   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:25.035799   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:25.536246   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:26.036020   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:26.535751   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:27.035539   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:25.474014   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:25.474526   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find current IP address of domain flannel-474762 in network mk-flannel-474762
	I0421 20:12:25.474552   72192 main.go:141] libmachine: (flannel-474762) DBG | I0421 20:12:25.474496   72233 retry.go:31] will retry after 5.979097526s: waiting for machine to come up
	I0421 20:12:27.536115   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:28.035647   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:28.535807   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:29.035500   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:29.536266   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:30.035918   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:30.535542   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:31.036242   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:31.536122   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:32.035424   70482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:32.202448   70482 kubeadm.go:1107] duration metric: took 13.364343795s to wait for elevateKubeSystemPrivileges
	W0421 20:12:32.202492   70482 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:12:32.202502   70482 kubeadm.go:393] duration metric: took 26.040925967s to StartCluster
	I0421 20:12:32.202525   70482 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:32.202596   70482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:12:32.204550   70482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:32.204847   70482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:12:32.204862   70482 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:12:32.204930   70482 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-474762"
	I0421 20:12:32.204948   70482 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-474762"
	I0421 20:12:32.204964   70482 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-474762"
	I0421 20:12:32.204990   70482 host.go:66] Checking if "enable-default-cni-474762" exists ...
	I0421 20:12:32.204996   70482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-474762"
	I0421 20:12:32.205031   70482 config.go:182] Loaded profile config "enable-default-cni-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:32.204840   70482 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:12:32.207108   70482 out.go:177] * Verifying Kubernetes components...
	I0421 20:12:32.205471   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.207157   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.205487   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.207192   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.208722   70482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:12:32.223460   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0421 20:12:32.224078   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.224647   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.224671   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.225042   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.225858   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.225891   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.227469   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0421 20:12:32.228199   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.228857   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.228882   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.229351   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.229571   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetState
	I0421 20:12:32.233549   70482 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-474762"
	I0421 20:12:32.233597   70482 host.go:66] Checking if "enable-default-cni-474762" exists ...
	I0421 20:12:32.234025   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.234047   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.243026   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0421 20:12:32.243480   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.244592   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.244609   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.244996   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.245274   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetState
	I0421 20:12:32.247036   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .DriverName
	I0421 20:12:32.248978   70482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:12:31.456651   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.457178   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has current primary IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.457199   72192 main.go:141] libmachine: (flannel-474762) Found IP for machine: 192.168.61.193
	I0421 20:12:31.457211   72192 main.go:141] libmachine: (flannel-474762) Reserving static IP address...
	I0421 20:12:31.457533   72192 main.go:141] libmachine: (flannel-474762) DBG | unable to find host DHCP lease matching {name: "flannel-474762", mac: "52:54:00:e5:f0:3c", ip: "192.168.61.193"} in network mk-flannel-474762
	I0421 20:12:31.534817   72192 main.go:141] libmachine: (flannel-474762) DBG | Getting to WaitForSSH function...
	I0421 20:12:31.534847   72192 main.go:141] libmachine: (flannel-474762) Reserved static IP address: 192.168.61.193
	I0421 20:12:31.534860   72192 main.go:141] libmachine: (flannel-474762) Waiting for SSH to be available...
	I0421 20:12:31.537540   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.537967   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.537996   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.538131   72192 main.go:141] libmachine: (flannel-474762) DBG | Using SSH client type: external
	I0421 20:12:31.538156   72192 main.go:141] libmachine: (flannel-474762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa (-rw-------)
	I0421 20:12:31.538198   72192 main.go:141] libmachine: (flannel-474762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 20:12:31.538216   72192 main.go:141] libmachine: (flannel-474762) DBG | About to run SSH command:
	I0421 20:12:31.538232   72192 main.go:141] libmachine: (flannel-474762) DBG | exit 0
	I0421 20:12:31.670677   72192 main.go:141] libmachine: (flannel-474762) DBG | SSH cmd err, output: <nil>: 
	I0421 20:12:31.670993   72192 main.go:141] libmachine: (flannel-474762) KVM machine creation complete!
	I0421 20:12:31.671348   72192 main.go:141] libmachine: (flannel-474762) Calling .GetConfigRaw
	I0421 20:12:31.671903   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:31.672101   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:31.672308   72192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 20:12:31.672337   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:12:31.674018   72192 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 20:12:31.674037   72192 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 20:12:31.674045   72192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 20:12:31.674054   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:31.676634   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.677065   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.677101   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.677224   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:31.677426   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.677581   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.677727   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:31.677933   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:31.678206   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:31.678222   72192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 20:12:31.790033   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:12:31.790084   72192 main.go:141] libmachine: Detecting the provisioner...
	I0421 20:12:31.790094   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:31.792728   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.793156   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.793183   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.793366   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:31.793557   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.793721   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.793854   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:31.793994   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:31.794260   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:31.794279   72192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 20:12:31.907518   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 20:12:31.907609   72192 main.go:141] libmachine: found compatible host: buildroot
	I0421 20:12:31.907632   72192 main.go:141] libmachine: Provisioning with buildroot...
	I0421 20:12:31.907646   72192 main.go:141] libmachine: (flannel-474762) Calling .GetMachineName
	I0421 20:12:31.907914   72192 buildroot.go:166] provisioning hostname "flannel-474762"
	I0421 20:12:31.907944   72192 main.go:141] libmachine: (flannel-474762) Calling .GetMachineName
	I0421 20:12:31.908067   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:31.910582   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.910924   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:31.910961   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:31.911089   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:31.911282   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.911457   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:31.911628   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:31.911821   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:31.911995   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:31.912008   72192 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-474762 && echo "flannel-474762" | sudo tee /etc/hostname
	I0421 20:12:32.046907   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-474762
	
	I0421 20:12:32.046936   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.050349   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.050687   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.050716   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.050949   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:32.051142   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.051311   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.051538   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:32.051760   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:32.051971   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:32.051994   72192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-474762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-474762/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-474762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:12:32.187456   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:12:32.187486   72192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 20:12:32.187540   72192 buildroot.go:174] setting up certificates
	I0421 20:12:32.187555   72192 provision.go:84] configureAuth start
	I0421 20:12:32.187575   72192 main.go:141] libmachine: (flannel-474762) Calling .GetMachineName
	I0421 20:12:32.187920   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:32.190703   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.191093   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.191123   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.191264   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.193823   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.194130   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.194156   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.194326   72192 provision.go:143] copyHostCerts
	I0421 20:12:32.194388   72192 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 20:12:32.194400   72192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 20:12:32.194484   72192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 20:12:32.194622   72192 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 20:12:32.194636   72192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 20:12:32.194676   72192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 20:12:32.194754   72192 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 20:12:32.194766   72192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 20:12:32.194800   72192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 20:12:32.194919   72192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.flannel-474762 san=[127.0.0.1 192.168.61.193 flannel-474762 localhost minikube]
	I0421 20:12:32.607939   72192 provision.go:177] copyRemoteCerts
	I0421 20:12:32.607991   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:12:32.608017   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.610847   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.611192   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.611245   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.611384   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:32.611573   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.611776   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:32.611927   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.525203   73732 start.go:364] duration metric: took 12.123630486s to acquireMachinesLock for "bridge-474762"
	I0421 20:12:33.525276   73732 start.go:93] Provisioning new machine with config: &{Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:12:33.525458   73732 start.go:125] createHost starting for "" (driver="kvm2")
	I0421 20:12:32.251335   70482 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:12:32.251356   70482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:12:32.251376   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHHostname
	I0421 20:12:32.254886   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.255244   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0421 20:12:32.255433   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:94:25", ip: ""} in network mk-enable-default-cni-474762: {Iface:virbr1 ExpiryTime:2024-04-21 21:11:50 +0000 UTC Type:0 Mac:52:54:00:3e:94:25 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:enable-default-cni-474762 Clientid:01:52:54:00:3e:94:25}
	I0421 20:12:32.255448   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined IP address 192.168.39.147 and MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.255605   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.255692   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHPort
	I0421 20:12:32.255837   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.256262   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.257209   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.257436   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHUsername
	I0421 20:12:32.257586   70482 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/enable-default-cni-474762/id_rsa Username:docker}
	I0421 20:12:32.257720   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.258365   70482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:32.258386   70482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:32.274088   70482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0421 20:12:32.274737   70482 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:32.275421   70482 main.go:141] libmachine: Using API Version  1
	I0421 20:12:32.275442   70482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:32.275895   70482 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:32.276074   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetState
	I0421 20:12:32.277850   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .DriverName
	I0421 20:12:32.278647   70482 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:12:32.278662   70482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:12:32.278680   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHHostname
	I0421 20:12:32.282461   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.282843   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:94:25", ip: ""} in network mk-enable-default-cni-474762: {Iface:virbr1 ExpiryTime:2024-04-21 21:11:50 +0000 UTC Type:0 Mac:52:54:00:3e:94:25 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:enable-default-cni-474762 Clientid:01:52:54:00:3e:94:25}
	I0421 20:12:32.282864   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | domain enable-default-cni-474762 has defined IP address 192.168.39.147 and MAC address 52:54:00:3e:94:25 in network mk-enable-default-cni-474762
	I0421 20:12:32.283085   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHPort
	I0421 20:12:32.283299   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.283476   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .GetSSHUsername
	I0421 20:12:32.283653   70482 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/enable-default-cni-474762/id_rsa Username:docker}
	I0421 20:12:32.526452   70482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:12:32.526647   70482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:12:32.563628   70482 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-474762" to be "Ready" ...
	I0421 20:12:32.606472   70482 node_ready.go:49] node "enable-default-cni-474762" has status "Ready":"True"
	I0421 20:12:32.606496   70482 node_ready.go:38] duration metric: took 42.82555ms for node "enable-default-cni-474762" to be "Ready" ...
	I0421 20:12:32.606508   70482 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:12:32.635956   70482 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace to be "Ready" ...
	I0421 20:12:32.708739   70482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:12:32.796406   70482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:12:33.479655   70482 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0421 20:12:33.479745   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:33.479775   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:33.480076   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | Closing plugin on server side
	I0421 20:12:33.480128   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:33.480136   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:33.480144   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:33.480152   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:33.480368   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:33.480384   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:33.480406   70482 main.go:141] libmachine: (enable-default-cni-474762) DBG | Closing plugin on server side
	I0421 20:12:33.503955   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:33.503978   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:33.504295   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:33.504310   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:34.049278   70482 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-474762" context rescaled to 1 replicas
	I0421 20:12:34.241503   70482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.445052637s)
	I0421 20:12:34.241559   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:34.241573   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:34.241823   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:34.241837   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:34.241847   70482 main.go:141] libmachine: Making call to close driver server
	I0421 20:12:34.241854   70482 main.go:141] libmachine: (enable-default-cni-474762) Calling .Close
	I0421 20:12:34.242160   70482 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:12:34.242174   70482 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:12:34.244044   70482 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 20:12:32.704322   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:12:32.733332   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0421 20:12:32.760670   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:12:32.791659   72192 provision.go:87] duration metric: took 604.087927ms to configureAuth
	I0421 20:12:32.791686   72192 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:12:32.791888   72192 config.go:182] Loaded profile config "flannel-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:32.791954   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:32.795174   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.795609   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:32.795652   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:32.795817   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:32.796014   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.796183   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:32.796304   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:32.796465   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:32.796689   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:32.796712   72192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 20:12:33.124311   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 20:12:33.124340   72192 main.go:141] libmachine: Checking connection to Docker...
	I0421 20:12:33.124350   72192 main.go:141] libmachine: (flannel-474762) Calling .GetURL
	I0421 20:12:33.125711   72192 main.go:141] libmachine: (flannel-474762) DBG | Using libvirt version 6000000
	I0421 20:12:33.128253   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.128646   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.128679   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.128821   72192 main.go:141] libmachine: Docker is up and running!
	I0421 20:12:33.128839   72192 main.go:141] libmachine: Reticulating splines...
	I0421 20:12:33.128863   72192 client.go:171] duration metric: took 30.396087778s to LocalClient.Create
	I0421 20:12:33.128886   72192 start.go:167] duration metric: took 30.396167257s to libmachine.API.Create "flannel-474762"
	I0421 20:12:33.128898   72192 start.go:293] postStartSetup for "flannel-474762" (driver="kvm2")
	I0421 20:12:33.128911   72192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:12:33.128933   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.129232   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:12:33.129261   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.132028   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.132312   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.132344   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.132547   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.132751   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.132907   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.133083   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.230023   72192 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:12:33.236698   72192 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:12:33.236723   72192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 20:12:33.236797   72192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 20:12:33.236925   72192 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 20:12:33.237036   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:12:33.248554   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:12:33.288956   72192 start.go:296] duration metric: took 160.043514ms for postStartSetup
	I0421 20:12:33.289010   72192 main.go:141] libmachine: (flannel-474762) Calling .GetConfigRaw
	I0421 20:12:33.328895   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:33.332130   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.332650   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.332672   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.333159   72192 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/config.json ...
	I0421 20:12:33.400280   72192 start.go:128] duration metric: took 30.687751144s to createHost
	I0421 20:12:33.400327   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.403706   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.404160   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.404183   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.404364   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.404635   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.404827   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.404995   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.405124   72192 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:33.405345   72192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.193 22 <nil> <nil>}
	I0421 20:12:33.405357   72192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 20:12:33.525039   72192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713730353.513946423
	
	I0421 20:12:33.525061   72192 fix.go:216] guest clock: 1713730353.513946423
	I0421 20:12:33.525070   72192 fix.go:229] Guest: 2024-04-21 20:12:33.513946423 +0000 UTC Remote: 2024-04-21 20:12:33.400309273 +0000 UTC m=+30.821670180 (delta=113.63715ms)
	I0421 20:12:33.525095   72192 fix.go:200] guest clock delta is within tolerance: 113.63715ms
	I0421 20:12:33.525102   72192 start.go:83] releasing machines lock for "flannel-474762", held for 30.812680837s
	I0421 20:12:33.525133   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.525440   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:33.528379   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.528767   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.528838   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.528954   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.529557   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.529770   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:12:33.529915   72192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:12:33.529957   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.529980   72192 ssh_runner.go:195] Run: cat /version.json
	I0421 20:12:33.530018   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:12:33.533011   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533224   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533415   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.533447   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533606   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.533746   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:33.533781   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:33.533804   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.533942   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:12:33.534111   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:12:33.534190   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.534376   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:12:33.534390   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.534532   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:12:33.657019   72192 ssh_runner.go:195] Run: systemctl --version
	I0421 20:12:33.669786   72192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 20:12:34.137913   72192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 20:12:34.145889   72192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:12:34.145954   72192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:12:34.171202   72192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:12:34.171236   72192 start.go:494] detecting cgroup driver to use...
	I0421 20:12:34.171293   72192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:12:34.197538   72192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:12:34.219387   72192 docker.go:217] disabling cri-docker service (if available) ...
	I0421 20:12:34.219456   72192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 20:12:34.240560   72192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 20:12:34.262374   72192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 20:12:34.423302   72192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 20:12:34.613903   72192 docker.go:233] disabling docker service ...
	I0421 20:12:34.613975   72192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 20:12:34.636521   72192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 20:12:34.656037   72192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 20:12:34.801762   72192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 20:12:34.979812   72192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 20:12:35.002207   72192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:12:35.030369   72192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 20:12:35.030445   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.047623   72192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 20:12:35.047734   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.079619   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.093458   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.108610   72192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:12:35.122721   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.135454   72192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.157606   72192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:12:35.170333   72192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:12:35.180812   72192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 20:12:35.180879   72192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 20:12:35.195621   72192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:12:35.208289   72192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:12:35.366682   72192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 20:12:35.543529   72192 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 20:12:35.543594   72192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 20:12:35.549125   72192 start.go:562] Will wait 60s for crictl version
	I0421 20:12:35.549183   72192 ssh_runner.go:195] Run: which crictl
	I0421 20:12:35.553983   72192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:12:35.597517   72192 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 20:12:35.597620   72192 ssh_runner.go:195] Run: crio --version
	I0421 20:12:35.633341   72192 ssh_runner.go:195] Run: crio --version
	I0421 20:12:35.670906   72192 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 20:12:33.537690   73732 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0421 20:12:33.537933   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:12:33.537991   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:12:33.553837   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0421 20:12:33.554554   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:12:33.558401   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:12:33.558432   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:12:33.559772   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:12:33.560002   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:33.560172   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:12:33.560360   73732 start.go:159] libmachine.API.Create for "bridge-474762" (driver="kvm2")
	I0421 20:12:33.560387   73732 client.go:168] LocalClient.Create starting
	I0421 20:12:33.560427   73732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem
	I0421 20:12:33.560471   73732 main.go:141] libmachine: Decoding PEM data...
	I0421 20:12:33.560489   73732 main.go:141] libmachine: Parsing certificate...
	I0421 20:12:33.560569   73732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem
	I0421 20:12:33.560604   73732 main.go:141] libmachine: Decoding PEM data...
	I0421 20:12:33.560625   73732 main.go:141] libmachine: Parsing certificate...
	I0421 20:12:33.560671   73732 main.go:141] libmachine: Running pre-create checks...
	I0421 20:12:33.560688   73732 main.go:141] libmachine: (bridge-474762) Calling .PreCreateCheck
	I0421 20:12:33.561223   73732 main.go:141] libmachine: (bridge-474762) Calling .GetConfigRaw
	I0421 20:12:33.602748   73732 main.go:141] libmachine: Creating machine...
	I0421 20:12:33.602778   73732 main.go:141] libmachine: (bridge-474762) Calling .Create
	I0421 20:12:33.603098   73732 main.go:141] libmachine: (bridge-474762) Creating KVM machine...
	I0421 20:12:33.604441   73732 main.go:141] libmachine: (bridge-474762) DBG | found existing default KVM network
	I0421 20:12:33.605658   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:33.605477   73861 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:3b:8c} reservation:<nil>}
	I0421 20:12:33.606898   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:33.606789   73861 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001134e0}
	I0421 20:12:33.606927   73732 main.go:141] libmachine: (bridge-474762) DBG | created network xml: 
	I0421 20:12:33.606938   73732 main.go:141] libmachine: (bridge-474762) DBG | <network>
	I0421 20:12:33.606952   73732 main.go:141] libmachine: (bridge-474762) DBG |   <name>mk-bridge-474762</name>
	I0421 20:12:33.606960   73732 main.go:141] libmachine: (bridge-474762) DBG |   <dns enable='no'/>
	I0421 20:12:33.606972   73732 main.go:141] libmachine: (bridge-474762) DBG |   
	I0421 20:12:33.606983   73732 main.go:141] libmachine: (bridge-474762) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0421 20:12:33.606990   73732 main.go:141] libmachine: (bridge-474762) DBG |     <dhcp>
	I0421 20:12:33.607005   73732 main.go:141] libmachine: (bridge-474762) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0421 20:12:33.607025   73732 main.go:141] libmachine: (bridge-474762) DBG |     </dhcp>
	I0421 20:12:33.607037   73732 main.go:141] libmachine: (bridge-474762) DBG |   </ip>
	I0421 20:12:33.607043   73732 main.go:141] libmachine: (bridge-474762) DBG |   
	I0421 20:12:33.607051   73732 main.go:141] libmachine: (bridge-474762) DBG | </network>
	I0421 20:12:33.607059   73732 main.go:141] libmachine: (bridge-474762) DBG | 
	I0421 20:12:33.632680   73732 main.go:141] libmachine: (bridge-474762) DBG | trying to create private KVM network mk-bridge-474762 192.168.50.0/24...
	I0421 20:12:33.722819   73732 main.go:141] libmachine: (bridge-474762) DBG | private KVM network mk-bridge-474762 192.168.50.0/24 created
	I0421 20:12:33.722882   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:33.722728   73861 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:12:33.722916   73732 main.go:141] libmachine: (bridge-474762) Setting up store path in /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762 ...
	I0421 20:12:33.722941   73732 main.go:141] libmachine: (bridge-474762) Building disk image from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 20:12:33.722961   73732 main.go:141] libmachine: (bridge-474762) Downloading /home/jenkins/minikube-integration/18702-3854/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 20:12:34.025437   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:34.025262   73861 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa...
	I0421 20:12:34.129107   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:34.128975   73861 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/bridge-474762.rawdisk...
	I0421 20:12:34.129148   73732 main.go:141] libmachine: (bridge-474762) DBG | Writing magic tar header
	I0421 20:12:34.129164   73732 main.go:141] libmachine: (bridge-474762) DBG | Writing SSH key tar header
	I0421 20:12:34.129177   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:34.129119   73861 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762 ...
	I0421 20:12:34.129252   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762
	I0421 20:12:34.129332   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762 (perms=drwx------)
	I0421 20:12:34.129367   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube/machines (perms=drwxr-xr-x)
	I0421 20:12:34.129381   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube/machines
	I0421 20:12:34.129395   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854/.minikube (perms=drwxr-xr-x)
	I0421 20:12:34.129413   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration/18702-3854 (perms=drwxrwxr-x)
	I0421 20:12:34.129426   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0421 20:12:34.129442   73732 main.go:141] libmachine: (bridge-474762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0421 20:12:34.129453   73732 main.go:141] libmachine: (bridge-474762) Creating domain...
	I0421 20:12:34.129486   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 20:12:34.129516   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18702-3854
	I0421 20:12:34.129534   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0421 20:12:34.129547   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home/jenkins
	I0421 20:12:34.129564   73732 main.go:141] libmachine: (bridge-474762) DBG | Checking permissions on dir: /home
	I0421 20:12:34.129598   73732 main.go:141] libmachine: (bridge-474762) DBG | Skipping /home - not owner
	I0421 20:12:34.130666   73732 main.go:141] libmachine: (bridge-474762) define libvirt domain using xml: 
	I0421 20:12:34.130688   73732 main.go:141] libmachine: (bridge-474762) <domain type='kvm'>
	I0421 20:12:34.130698   73732 main.go:141] libmachine: (bridge-474762)   <name>bridge-474762</name>
	I0421 20:12:34.130706   73732 main.go:141] libmachine: (bridge-474762)   <memory unit='MiB'>3072</memory>
	I0421 20:12:34.130715   73732 main.go:141] libmachine: (bridge-474762)   <vcpu>2</vcpu>
	I0421 20:12:34.130733   73732 main.go:141] libmachine: (bridge-474762)   <features>
	I0421 20:12:34.130741   73732 main.go:141] libmachine: (bridge-474762)     <acpi/>
	I0421 20:12:34.130747   73732 main.go:141] libmachine: (bridge-474762)     <apic/>
	I0421 20:12:34.130758   73732 main.go:141] libmachine: (bridge-474762)     <pae/>
	I0421 20:12:34.130765   73732 main.go:141] libmachine: (bridge-474762)     
	I0421 20:12:34.130773   73732 main.go:141] libmachine: (bridge-474762)   </features>
	I0421 20:12:34.130800   73732 main.go:141] libmachine: (bridge-474762)   <cpu mode='host-passthrough'>
	I0421 20:12:34.130837   73732 main.go:141] libmachine: (bridge-474762)   
	I0421 20:12:34.130866   73732 main.go:141] libmachine: (bridge-474762)   </cpu>
	I0421 20:12:34.130882   73732 main.go:141] libmachine: (bridge-474762)   <os>
	I0421 20:12:34.130903   73732 main.go:141] libmachine: (bridge-474762)     <type>hvm</type>
	I0421 20:12:34.130922   73732 main.go:141] libmachine: (bridge-474762)     <boot dev='cdrom'/>
	I0421 20:12:34.130932   73732 main.go:141] libmachine: (bridge-474762)     <boot dev='hd'/>
	I0421 20:12:34.130940   73732 main.go:141] libmachine: (bridge-474762)     <bootmenu enable='no'/>
	I0421 20:12:34.130947   73732 main.go:141] libmachine: (bridge-474762)   </os>
	I0421 20:12:34.130956   73732 main.go:141] libmachine: (bridge-474762)   <devices>
	I0421 20:12:34.130967   73732 main.go:141] libmachine: (bridge-474762)     <disk type='file' device='cdrom'>
	I0421 20:12:34.130999   73732 main.go:141] libmachine: (bridge-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/boot2docker.iso'/>
	I0421 20:12:34.131020   73732 main.go:141] libmachine: (bridge-474762)       <target dev='hdc' bus='scsi'/>
	I0421 20:12:34.131046   73732 main.go:141] libmachine: (bridge-474762)       <readonly/>
	I0421 20:12:34.131061   73732 main.go:141] libmachine: (bridge-474762)     </disk>
	I0421 20:12:34.131073   73732 main.go:141] libmachine: (bridge-474762)     <disk type='file' device='disk'>
	I0421 20:12:34.131086   73732 main.go:141] libmachine: (bridge-474762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0421 20:12:34.131111   73732 main.go:141] libmachine: (bridge-474762)       <source file='/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/bridge-474762.rawdisk'/>
	I0421 20:12:34.131123   73732 main.go:141] libmachine: (bridge-474762)       <target dev='hda' bus='virtio'/>
	I0421 20:12:34.131133   73732 main.go:141] libmachine: (bridge-474762)     </disk>
	I0421 20:12:34.131143   73732 main.go:141] libmachine: (bridge-474762)     <interface type='network'>
	I0421 20:12:34.131151   73732 main.go:141] libmachine: (bridge-474762)       <source network='mk-bridge-474762'/>
	I0421 20:12:34.131172   73732 main.go:141] libmachine: (bridge-474762)       <model type='virtio'/>
	I0421 20:12:34.131183   73732 main.go:141] libmachine: (bridge-474762)     </interface>
	I0421 20:12:34.131200   73732 main.go:141] libmachine: (bridge-474762)     <interface type='network'>
	I0421 20:12:34.131213   73732 main.go:141] libmachine: (bridge-474762)       <source network='default'/>
	I0421 20:12:34.131223   73732 main.go:141] libmachine: (bridge-474762)       <model type='virtio'/>
	I0421 20:12:34.131231   73732 main.go:141] libmachine: (bridge-474762)     </interface>
	I0421 20:12:34.131242   73732 main.go:141] libmachine: (bridge-474762)     <serial type='pty'>
	I0421 20:12:34.131251   73732 main.go:141] libmachine: (bridge-474762)       <target port='0'/>
	I0421 20:12:34.131262   73732 main.go:141] libmachine: (bridge-474762)     </serial>
	I0421 20:12:34.131272   73732 main.go:141] libmachine: (bridge-474762)     <console type='pty'>
	I0421 20:12:34.131281   73732 main.go:141] libmachine: (bridge-474762)       <target type='serial' port='0'/>
	I0421 20:12:34.131291   73732 main.go:141] libmachine: (bridge-474762)     </console>
	I0421 20:12:34.131300   73732 main.go:141] libmachine: (bridge-474762)     <rng model='virtio'>
	I0421 20:12:34.131312   73732 main.go:141] libmachine: (bridge-474762)       <backend model='random'>/dev/random</backend>
	I0421 20:12:34.131329   73732 main.go:141] libmachine: (bridge-474762)     </rng>
	I0421 20:12:34.131351   73732 main.go:141] libmachine: (bridge-474762)     
	I0421 20:12:34.131365   73732 main.go:141] libmachine: (bridge-474762)     
	I0421 20:12:34.131376   73732 main.go:141] libmachine: (bridge-474762)   </devices>
	I0421 20:12:34.131389   73732 main.go:141] libmachine: (bridge-474762) </domain>
	I0421 20:12:34.131401   73732 main.go:141] libmachine: (bridge-474762) 
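
The XML just logged is the libvirt domain definition the kvm2 driver submits for the new VM. As a rough sketch of how such a definition can be submitted through the libvirt Go bindings (the libvirt.org/go/libvirt calls are believed correct but are an illustrative assumption here, and domainXML is a placeholder rather than the full document above):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, the same URI the kvm2 driver
	// defaults to (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domainXML stands in for a full <domain type='kvm'> document like the
	// one logged above (name, memory, vcpu, disks, interfaces, rng, ...).
	domainXML := "<domain type='kvm'>...</domain>"

	// Define the domain persistently, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}
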
	I0421 20:12:34.136759   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:ce:a6:b7 in network default
	I0421 20:12:34.137527   73732 main.go:141] libmachine: (bridge-474762) Ensuring networks are active...
	I0421 20:12:34.137548   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:34.138499   73732 main.go:141] libmachine: (bridge-474762) Ensuring network default is active
	I0421 20:12:34.138920   73732 main.go:141] libmachine: (bridge-474762) Ensuring network mk-bridge-474762 is active
	I0421 20:12:34.139784   73732 main.go:141] libmachine: (bridge-474762) Getting domain xml...
	I0421 20:12:34.140557   73732 main.go:141] libmachine: (bridge-474762) Creating domain...
	I0421 20:12:35.542787   73732 main.go:141] libmachine: (bridge-474762) Waiting to get IP...
	I0421 20:12:35.543828   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:35.544357   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:35.544509   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:35.544428   73861 retry.go:31] will retry after 258.09788ms: waiting for machine to come up
	I0421 20:12:35.803943   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:35.804409   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:35.804429   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:35.804364   73861 retry.go:31] will retry after 322.953644ms: waiting for machine to come up
	I0421 20:12:36.128871   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:36.129435   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:36.129461   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:36.129396   73861 retry.go:31] will retry after 305.862308ms: waiting for machine to come up
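
The repeated "will retry after ...ms: waiting for machine to come up" lines come from a retry helper that polls the new domain for a DHCP lease. A minimal, standard-library-only sketch of that polling pattern, with a hypothetical checkIP helper and made-up timings (not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkIP is a stand-in for "look up the domain's current DHCP lease";
// it is a hypothetical helper, not part of minikube.
func checkIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls checkIP with a jittered, growing delay until it succeeds
// or the overall timeout expires.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := checkIP(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}

The jitter keeps parallel waiters from polling in lockstep, which matches the irregular delays visible in the log.
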
	I0421 20:12:34.245578   70482 addons.go:505] duration metric: took 2.040710747s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0421 20:12:34.643126   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:36.647179   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:35.672563   72192 main.go:141] libmachine: (flannel-474762) Calling .GetIP
	I0421 20:12:35.675769   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:35.676150   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:12:35.676178   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:12:35.676478   72192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0421 20:12:35.681283   72192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:12:35.695237   72192 kubeadm.go:877] updating cluster {Name:flannel-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-474762
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:12:35.695376   72192 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:12:35.695416   72192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:12:35.740512   72192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 20:12:35.740573   72192 ssh_runner.go:195] Run: which lz4
	I0421 20:12:35.745048   72192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 20:12:35.749946   72192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:12:35.749968   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 20:12:37.580472   72192 crio.go:462] duration metric: took 1.835461419s to copy over tarball
	I0421 20:12:37.580538   72192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:12:36.436833   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:36.437544   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:36.437575   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:36.437496   73861 retry.go:31] will retry after 514.273827ms: waiting for machine to come up
	I0421 20:12:36.953081   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:36.953693   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:36.953718   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:36.953643   73861 retry.go:31] will retry after 481.725809ms: waiting for machine to come up
	I0421 20:12:37.437538   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:37.438241   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:37.438260   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:37.438159   73861 retry.go:31] will retry after 953.112004ms: waiting for machine to come up
	I0421 20:12:38.393130   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:38.393169   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:38.393186   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:38.393033   73861 retry.go:31] will retry after 810.769843ms: waiting for machine to come up
	I0421 20:12:39.205334   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:39.205909   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:39.205933   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:39.205852   73861 retry.go:31] will retry after 984.63759ms: waiting for machine to come up
	I0421 20:12:40.192463   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:40.193017   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:40.193045   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:40.192969   73861 retry.go:31] will retry after 1.246490815s: waiting for machine to come up
	I0421 20:12:39.145300   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:41.816379   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:40.460252   72192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87967524s)
	I0421 20:12:40.460283   72192 crio.go:469] duration metric: took 2.879780165s to extract the tarball
	I0421 20:12:40.460293   72192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:12:40.507379   72192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:12:40.562053   72192 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:12:40.562087   72192 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:12:40.562098   72192 kubeadm.go:928] updating node { 192.168.61.193 8443 v1.30.0 crio true true} ...
	I0421 20:12:40.562196   72192 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-474762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:flannel-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0421 20:12:40.562262   72192 ssh_runner.go:195] Run: crio config
	I0421 20:12:40.613881   72192 cni.go:84] Creating CNI manager for "flannel"
	I0421 20:12:40.613914   72192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:12:40.613936   72192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.193 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-474762 NodeName:flannel-474762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:12:40.614139   72192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-474762"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:12:40.614232   72192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:12:40.626153   72192 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:12:40.626220   72192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:12:40.638100   72192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0421 20:12:40.658861   72192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:12:40.679713   72192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0421 20:12:40.701954   72192 ssh_runner.go:195] Run: grep 192.168.61.193	control-plane.minikube.internal$ /etc/hosts
	I0421 20:12:40.707389   72192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:12:40.723703   72192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:12:40.859146   72192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:12:40.880212   72192 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762 for IP: 192.168.61.193
	I0421 20:12:40.880234   72192 certs.go:194] generating shared ca certs ...
	I0421 20:12:40.880249   72192 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:40.880398   72192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:12:40.880451   72192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:12:40.880464   72192 certs.go:256] generating profile certs ...
	I0421 20:12:40.880532   72192 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.key
	I0421 20:12:40.880550   72192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt with IP's: []
	I0421 20:12:41.077359   72192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt ...
	I0421 20:12:41.077397   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: {Name:mkc17f8da1dbd414399caa0ace4fab4d8d169c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.077594   72192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.key ...
	I0421 20:12:41.077613   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.key: {Name:mk213f46ea1f77448d08b4645527411446138286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.077745   72192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6
	I0421 20:12:41.077768   72192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.193]
	I0421 20:12:41.240564   72192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6 ...
	I0421 20:12:41.240592   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6: {Name:mk29fcd92080aa6ef47d1810b5dd3464b8a192a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.306466   72192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6 ...
	I0421 20:12:41.306508   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6: {Name:mk9dd938fa76d12f420535efdfbf38a92567ab73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.306660   72192 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt.b63625f6 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt
	I0421 20:12:41.306791   72192 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key.b63625f6 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key
	I0421 20:12:41.306880   72192 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key
	I0421 20:12:41.306900   72192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt with IP's: []
	I0421 20:12:41.357681   72192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt ...
	I0421 20:12:41.357707   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt: {Name:mk2a083e2046b9f05e37b262335a9bcd7a0b857b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:12:41.358655   72192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key ...
	I0421 20:12:41.358672   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key: {Name:mkbff6c0f3583e74e38c84ab7806698762d4abfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
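
The certs.go/crypto.go entries above show minikube minting per-profile certificates (client, apiserver with IP SANs, proxy-client) signed by its shared CA. A self-contained sketch of the same idea using only crypto/x509; the throwaway in-process CA and all names here are illustrative assumptions, not minikube's actual key handling:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and cert (minikube would instead reuse .minikube/ca.key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the kind of IP SANs seen in the log
	// (service IP, localhost, node IP).
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.61.193"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued leaf certificate of %d bytes", len(leafDER))
}
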
	I0421 20:12:41.358858   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:12:41.358890   72192 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:12:41.358900   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:12:41.358925   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:12:41.358946   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:12:41.358966   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:12:41.359005   72192 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:12:41.359683   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:12:41.390769   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:12:41.417979   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:12:41.452227   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:12:41.483717   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 20:12:41.514583   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 20:12:41.564761   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:12:41.598195   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:12:41.691098   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:12:41.723493   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:12:41.755347   72192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:12:41.784237   72192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:12:41.804563   72192 ssh_runner.go:195] Run: openssl version
	I0421 20:12:41.811360   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:12:41.828268   72192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:12:41.833580   72192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:12:41.833635   72192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:12:41.840430   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:12:41.852439   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:12:41.864561   72192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:12:41.870407   72192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:12:41.870502   72192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:12:41.877032   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:12:41.889386   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:12:41.901532   72192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:12:41.908249   72192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:12:41.908303   72192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:12:41.915280   72192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:12:41.929403   72192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:12:41.935376   72192 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:12:41.935442   72192 kubeadm.go:391] StartCluster: {Name:flannel-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-474762 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:12:41.935534   72192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:12:41.935616   72192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:12:41.976978   72192 cri.go:89] found id: ""
	I0421 20:12:41.977037   72192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:12:41.988304   72192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:12:41.998915   72192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:12:42.009394   72192 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:12:42.009419   72192 kubeadm.go:156] found existing configuration files:
	
	I0421 20:12:42.009471   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:12:42.023018   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:12:42.023068   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:12:42.036467   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:12:42.046578   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:12:42.046639   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:12:42.060445   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:12:42.072111   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:12:42.072174   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:12:42.083184   72192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:12:42.094029   72192 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:12:42.094106   72192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:12:42.105504   72192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:12:42.164589   72192 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:12:42.164711   72192 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:12:42.319971   72192 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:12:42.320121   72192 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:12:42.320262   72192 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:12:42.588670   72192 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:12:42.591432   72192 out.go:204]   - Generating certificates and keys ...
	I0421 20:12:42.591528   72192 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:12:42.591608   72192 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:12:42.731282   72192 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:12:42.983351   72192 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:12:43.095317   72192 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 20:12:43.206072   72192 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 20:12:43.466252   72192 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 20:12:43.466590   72192 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [flannel-474762 localhost] and IPs [192.168.61.193 127.0.0.1 ::1]
	I0421 20:12:43.514694   72192 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 20:12:43.514955   72192 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [flannel-474762 localhost] and IPs [192.168.61.193 127.0.0.1 ::1]
	I0421 20:12:43.889877   72192 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:12:43.996775   72192 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:12:44.275943   72192 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 20:12:44.276843   72192 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:12:44.499582   72192 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:12:44.593886   72192 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:12:44.917393   72192 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:12:45.061380   72192 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:12:45.406186   72192 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:12:45.407455   72192 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:12:45.410292   72192 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:12:41.441446   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:41.493087   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:41.493118   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:41.441932   73861 retry.go:31] will retry after 1.979730834s: waiting for machine to come up
	I0421 20:12:43.423365   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:43.423901   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:43.423937   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:43.423844   73861 retry.go:31] will retry after 2.804462168s: waiting for machine to come up
	I0421 20:12:46.231392   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:46.231940   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:46.231980   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:46.231882   73861 retry.go:31] will retry after 3.463170537s: waiting for machine to come up
	I0421 20:12:44.144325   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:44.643899   70482 pod_ready.go:97] pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.147 HostIPs:[{IP:192.168.39
.147}] PodIP: PodIPs:[] StartTime:2024-04-21 20:12:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:12:34 +0000 UTC,FinishedAt:2024-04-21 20:12:44 +0000 UTC,ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431 Started:0xc0037a9d00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:12:44.643941   70482 pod_ready.go:81] duration metric: took 12.007952005s for pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace to be "Ready" ...
	E0421 20:12:44.643956   70482 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-rp9bc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:12:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.3
9.147 HostIPs:[{IP:192.168.39.147}] PodIP: PodIPs:[] StartTime:2024-04-21 20:12:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:12:34 +0000 UTC,FinishedAt:2024-04-21 20:12:44 +0000 UTC,ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://aeb87ccfd36eb9d80b902179ef0a815d00d8e067383253948b7b578069147431 Started:0xc0037a9d00 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:12:44.643973   70482 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace to be "Ready" ...
	I0421 20:12:46.652416   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
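
The interleaved pod_ready.go lines are another process polling coredns pods for the Ready condition and giving up when a pod reaches a terminal phase (the "(skipping!)" case above). A minimal client-go sketch of that check; the function name, kubeconfig handling, and the hard-coded pod name are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has the Ready condition set to True,
// treating a terminal phase as a hard error since such a pod can never
// become Ready again.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
		return false, fmt.Errorf("pod %s/%s has terminal phase %s", ns, name, pod.Status.Phase)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-xn48s")
	fmt.Println(ready, err)
}
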
	I0421 20:12:45.412343   72192 out.go:204]   - Booting up control plane ...
	I0421 20:12:45.412481   72192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:12:45.412580   72192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:12:45.413219   72192 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:12:45.433058   72192 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:12:45.435012   72192 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:12:45.435091   72192 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:12:45.580105   72192 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:12:45.580276   72192 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:12:46.580958   72192 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001296402s
	I0421 20:12:46.581082   72192 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:12:49.696701   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:49.697291   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:49.697318   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:49.697227   73861 retry.go:31] will retry after 3.570145567s: waiting for machine to come up
	I0421 20:12:48.653381   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:50.653659   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:52.082209   72192 kubeadm.go:309] [api-check] The API server is healthy after 5.501784628s
	I0421 20:12:52.096008   72192 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:12:52.113812   72192 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:12:52.149476   72192 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:12:52.149747   72192 kubeadm.go:309] [mark-control-plane] Marking the node flannel-474762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:12:52.166919   72192 kubeadm.go:309] [bootstrap-token] Using token: 7uvvlt.zezmhmug9wwgucft
	I0421 20:12:52.168399   72192 out.go:204]   - Configuring RBAC rules ...
	I0421 20:12:52.168519   72192 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:12:52.173908   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:12:52.189769   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:12:52.196946   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:12:52.201186   72192 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:12:52.205617   72192 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:12:52.490661   72192 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:12:52.958707   72192 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:12:53.490742   72192 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:12:53.491944   72192 kubeadm.go:309] 
	I0421 20:12:53.492025   72192 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:12:53.492039   72192 kubeadm.go:309] 
	I0421 20:12:53.492128   72192 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:12:53.492138   72192 kubeadm.go:309] 
	I0421 20:12:53.492194   72192 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:12:53.492276   72192 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:12:53.492364   72192 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:12:53.492374   72192 kubeadm.go:309] 
	I0421 20:12:53.492482   72192 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:12:53.492501   72192 kubeadm.go:309] 
	I0421 20:12:53.492541   72192 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:12:53.492548   72192 kubeadm.go:309] 
	I0421 20:12:53.492591   72192 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:12:53.492708   72192 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:12:53.492816   72192 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:12:53.492884   72192 kubeadm.go:309] 
	I0421 20:12:53.493046   72192 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:12:53.493158   72192 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:12:53.493173   72192 kubeadm.go:309] 
	I0421 20:12:53.493284   72192 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7uvvlt.zezmhmug9wwgucft \
	I0421 20:12:53.493418   72192 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:12:53.493449   72192 kubeadm.go:309] 	--control-plane 
	I0421 20:12:53.493464   72192 kubeadm.go:309] 
	I0421 20:12:53.493574   72192 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:12:53.493582   72192 kubeadm.go:309] 
	I0421 20:12:53.493678   72192 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7uvvlt.zezmhmug9wwgucft \
	I0421 20:12:53.493847   72192 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:12:53.494566   72192 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:12:53.494610   72192 cni.go:84] Creating CNI manager for "flannel"
	I0421 20:12:53.496633   72192 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0421 20:12:53.271551   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:53.272046   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find current IP address of domain bridge-474762 in network mk-bridge-474762
	I0421 20:12:53.272070   73732 main.go:141] libmachine: (bridge-474762) DBG | I0421 20:12:53.271997   73861 retry.go:31] will retry after 5.239553074s: waiting for machine to come up
	I0421 20:12:53.150597   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:55.152144   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:53.498032   72192 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 20:12:53.505199   72192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 20:12:53.505219   72192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0421 20:12:53.530750   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 20:12:53.961168   72192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:12:53.961256   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:53.961251   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-474762 minikube.k8s.io/updated_at=2024_04_21T20_12_53_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=flannel-474762 minikube.k8s.io/primary=true
	I0421 20:12:53.983252   72192 ops.go:34] apiserver oom_adj: -16
	I0421 20:12:54.154688   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:54.654773   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:55.154782   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:55.654891   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:56.154706   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:56.655509   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:57.155269   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:58.512902   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.513457   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has current primary IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.513504   73732 main.go:141] libmachine: (bridge-474762) Found IP for machine: 192.168.50.35
	I0421 20:12:58.513530   73732 main.go:141] libmachine: (bridge-474762) Reserving static IP address...
	I0421 20:12:58.513899   73732 main.go:141] libmachine: (bridge-474762) DBG | unable to find host DHCP lease matching {name: "bridge-474762", mac: "52:54:00:46:ee:7b", ip: "192.168.50.35"} in network mk-bridge-474762
	I0421 20:12:58.591525   73732 main.go:141] libmachine: (bridge-474762) DBG | Getting to WaitForSSH function...
	I0421 20:12:58.591556   73732 main.go:141] libmachine: (bridge-474762) Reserved static IP address: 192.168.50.35
	I0421 20:12:58.591569   73732 main.go:141] libmachine: (bridge-474762) Waiting for SSH to be available...
	I0421 20:12:58.594246   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.594710   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.594745   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.594905   73732 main.go:141] libmachine: (bridge-474762) DBG | Using SSH client type: external
	I0421 20:12:58.594926   73732 main.go:141] libmachine: (bridge-474762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa (-rw-------)
	I0421 20:12:58.594953   73732 main.go:141] libmachine: (bridge-474762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0421 20:12:58.594966   73732 main.go:141] libmachine: (bridge-474762) DBG | About to run SSH command:
	I0421 20:12:58.594982   73732 main.go:141] libmachine: (bridge-474762) DBG | exit 0
	I0421 20:12:58.722944   73732 main.go:141] libmachine: (bridge-474762) DBG | SSH cmd err, output: <nil>: 
	I0421 20:12:58.723218   73732 main.go:141] libmachine: (bridge-474762) KVM machine creation complete!
	I0421 20:12:58.723572   73732 main.go:141] libmachine: (bridge-474762) Calling .GetConfigRaw
	I0421 20:12:58.724176   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:12:58.724416   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:12:58.724594   73732 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0421 20:12:58.724612   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:12:58.726199   73732 main.go:141] libmachine: Detecting operating system of created instance...
	I0421 20:12:58.726215   73732 main.go:141] libmachine: Waiting for SSH to be available...
	I0421 20:12:58.726222   73732 main.go:141] libmachine: Getting to WaitForSSH function...
	I0421 20:12:58.726230   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:58.728727   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.729172   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.729198   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.729404   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:58.729574   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.729718   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.729872   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:58.730047   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:58.730341   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:58.730360   73732 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0421 20:12:58.841974   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:12:58.842017   73732 main.go:141] libmachine: Detecting the provisioner...
	I0421 20:12:58.842030   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:58.844975   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.845324   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.845366   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.845457   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:58.845693   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.845886   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.846096   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:58.846268   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:58.846461   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:58.846476   73732 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0421 20:12:58.951527   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0421 20:12:58.951602   73732 main.go:141] libmachine: found compatible host: buildroot
	I0421 20:12:58.951613   73732 main.go:141] libmachine: Provisioning with buildroot...
	I0421 20:12:58.951621   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:58.951885   73732 buildroot.go:166] provisioning hostname "bridge-474762"
	I0421 20:12:58.951913   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:58.952084   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:58.954961   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.955213   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:58.955235   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:58.955388   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:58.955580   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.955768   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:58.955919   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:58.956077   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:58.956279   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:58.956299   73732 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-474762 && echo "bridge-474762" | sudo tee /etc/hostname
	I0421 20:12:59.076059   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-474762
	
	I0421 20:12:59.076104   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.079018   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.079354   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.079384   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.079600   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:59.079775   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.079956   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.080081   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:59.080255   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:59.080461   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:59.080490   73732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-474762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-474762/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-474762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:12:59.193150   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:12:59.193179   73732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18702-3854/.minikube CaCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18702-3854/.minikube}
	I0421 20:12:59.193238   73732 buildroot.go:174] setting up certificates
	I0421 20:12:59.193251   73732 provision.go:84] configureAuth start
	I0421 20:12:59.193264   73732 main.go:141] libmachine: (bridge-474762) Calling .GetMachineName
	I0421 20:12:59.193555   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:12:59.196640   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.197050   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.197078   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.197266   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.199977   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.200375   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.200404   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.200566   73732 provision.go:143] copyHostCerts
	I0421 20:12:59.200620   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem, removing ...
	I0421 20:12:59.200633   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem
	I0421 20:12:59.200695   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/ca.pem (1078 bytes)
	I0421 20:12:59.200819   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem, removing ...
	I0421 20:12:59.200831   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem
	I0421 20:12:59.200859   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/cert.pem (1123 bytes)
	I0421 20:12:59.200934   73732 exec_runner.go:144] found /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem, removing ...
	I0421 20:12:59.200944   73732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem
	I0421 20:12:59.200967   73732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18702-3854/.minikube/key.pem (1679 bytes)
	I0421 20:12:59.201035   73732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem org=jenkins.bridge-474762 san=[127.0.0.1 192.168.50.35 bridge-474762 localhost minikube]
	I0421 20:12:59.578123   73732 provision.go:177] copyRemoteCerts
	I0421 20:12:59.578186   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:12:59.578222   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.581175   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.581471   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.581500   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.581672   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:59.581873   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.582052   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:59.582244   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:12:59.671308   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:12:59.703145   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:12:59.731679   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 20:12:59.763930   73732 provision.go:87] duration metric: took 570.666818ms to configureAuth
	I0421 20:12:59.763963   73732 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:12:59.764215   73732 config.go:182] Loaded profile config "bridge-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:12:59.764313   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:12:59.767256   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.767621   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:12:59.767654   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:12:59.767850   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:12:59.768034   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.768184   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:12:59.768362   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:12:59.768540   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:12:59.768725   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:12:59.768745   73732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0421 20:13:00.070815   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0421 20:13:00.070848   73732 main.go:141] libmachine: Checking connection to Docker...
	I0421 20:13:00.070859   73732 main.go:141] libmachine: (bridge-474762) Calling .GetURL
	I0421 20:13:00.072025   73732 main.go:141] libmachine: (bridge-474762) DBG | Using libvirt version 6000000
	I0421 20:13:00.074632   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.074985   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.075023   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.075178   73732 main.go:141] libmachine: Docker is up and running!
	I0421 20:13:00.075196   73732 main.go:141] libmachine: Reticulating splines...
	I0421 20:13:00.075203   73732 client.go:171] duration metric: took 26.514809444s to LocalClient.Create
	I0421 20:13:00.075232   73732 start.go:167] duration metric: took 26.514871671s to libmachine.API.Create "bridge-474762"
	I0421 20:13:00.075251   73732 start.go:293] postStartSetup for "bridge-474762" (driver="kvm2")
	I0421 20:13:00.075266   73732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:13:00.075291   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.075521   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:13:00.075546   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.077973   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.078401   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.078433   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.078628   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.078823   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.078993   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.079160   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:00.163821   73732 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:13:00.169566   73732 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:13:00.169596   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/addons for local assets ...
	I0421 20:13:00.169670   73732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18702-3854/.minikube/files for local assets ...
	I0421 20:13:00.169786   73732 filesync.go:149] local asset: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem -> 111752.pem in /etc/ssl/certs
	I0421 20:13:00.169925   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:13:00.181471   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:13:00.212417   73732 start.go:296] duration metric: took 137.147897ms for postStartSetup
	I0421 20:13:00.212481   73732 main.go:141] libmachine: (bridge-474762) Calling .GetConfigRaw
	I0421 20:13:00.213152   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:13:00.216240   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.216678   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.216707   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.217057   73732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/config.json ...
	I0421 20:13:00.217333   73732 start.go:128] duration metric: took 26.691860721s to createHost
	I0421 20:13:00.217366   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.220000   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.220313   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.220346   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.220487   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.220701   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.220897   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.221055   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.221226   73732 main.go:141] libmachine: Using SSH client type: native
	I0421 20:13:00.221447   73732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.35 22 <nil> <nil>}
	I0421 20:13:00.221458   73732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 20:13:00.327556   73732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713730380.306994750
	
	I0421 20:13:00.327600   73732 fix.go:216] guest clock: 1713730380.306994750
	I0421 20:13:00.327613   73732 fix.go:229] Guest: 2024-04-21 20:13:00.30699475 +0000 UTC Remote: 2024-04-21 20:13:00.217351909 +0000 UTC m=+38.939820834 (delta=89.642841ms)
	I0421 20:13:00.327649   73732 fix.go:200] guest clock delta is within tolerance: 89.642841ms
	I0421 20:13:00.327655   73732 start.go:83] releasing machines lock for "bridge-474762", held for 26.802411485s
	I0421 20:13:00.327701   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.328008   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:13:00.330915   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.331259   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.331288   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.331465   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.331923   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.332114   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:00.332228   73732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:13:00.332269   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.332350   73732 ssh_runner.go:195] Run: cat /version.json
	I0421 20:13:00.332375   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:00.334814   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335132   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335164   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.335187   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335354   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.335505   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.335561   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:00.335586   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:00.335691   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.335786   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:00.335877   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:00.335968   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:00.336145   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:00.336304   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:00.411669   73732 ssh_runner.go:195] Run: systemctl --version
	I0421 20:13:00.436757   73732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0421 20:13:00.608302   73732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 20:13:00.615628   73732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:13:00.615684   73732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:13:00.634392   73732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:13:00.634415   73732 start.go:494] detecting cgroup driver to use...
	I0421 20:13:00.634492   73732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:13:00.654407   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:13:00.672781   73732 docker.go:217] disabling cri-docker service (if available) ...
	I0421 20:13:00.672855   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0421 20:13:00.690246   73732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0421 20:13:00.709940   73732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0421 20:13:00.858946   73732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0421 20:13:01.012900   73732 docker.go:233] disabling docker service ...
	I0421 20:13:01.012968   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0421 20:13:01.030449   73732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0421 20:13:01.044904   73732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0421 20:13:01.191789   73732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0421 20:13:01.324317   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0421 20:13:01.341434   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:13:01.363396   73732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0421 20:13:01.363454   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.375746   73732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0421 20:13:01.375836   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.387909   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.401130   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.413072   73732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:13:01.425609   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.437685   73732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.458571   73732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0421 20:13:01.470843   73732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:13:01.481835   73732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0421 20:13:01.481897   73732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0421 20:13:01.497632   73732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:13:01.509462   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:01.645265   73732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0421 20:13:01.842015   73732 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0421 20:13:01.842117   73732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0421 20:13:01.847647   73732 start.go:562] Will wait 60s for crictl version
	I0421 20:13:01.847699   73732 ssh_runner.go:195] Run: which crictl
	I0421 20:13:01.852455   73732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:13:01.902259   73732 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0421 20:13:01.902346   73732 ssh_runner.go:195] Run: crio --version
	I0421 20:13:01.935242   73732 ssh_runner.go:195] Run: crio --version
	I0421 20:13:01.969320   73732 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0421 20:12:57.651527   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:59.657462   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:12:57.655155   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:58.155241   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:58.655272   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:59.154754   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:12:59.655756   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:00.155010   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:00.654868   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:01.155384   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:01.654997   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:02.155290   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:01.970923   73732 main.go:141] libmachine: (bridge-474762) Calling .GetIP
	I0421 20:13:01.973997   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:01.974373   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:01.974400   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:01.974728   73732 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0421 20:13:01.980059   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:13:01.995358   73732 kubeadm.go:877] updating cluster {Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:13:01.995477   73732 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 20:13:01.995537   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:13:02.033071   73732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0421 20:13:02.033139   73732 ssh_runner.go:195] Run: which lz4
	I0421 20:13:02.038142   73732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 20:13:02.042763   73732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:13:02.042785   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0421 20:13:03.798993   73732 crio.go:462] duration metric: took 1.760877713s to copy over tarball
	I0421 20:13:03.799081   73732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:13:02.155320   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:04.653886   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:02.655143   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:03.155732   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:03.655254   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:04.154980   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:04.654695   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:05.154770   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:05.655256   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:06.155249   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:06.655609   72192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:07.643449   72192 kubeadm.go:1107] duration metric: took 13.682253671s to wait for elevateKubeSystemPrivileges
	W0421 20:13:07.643486   72192 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:13:07.643493   72192 kubeadm.go:393] duration metric: took 25.708065058s to StartCluster
	I0421 20:13:07.643511   72192 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.643585   72192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:13:07.645549   72192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.645763   72192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:13:07.645779   72192 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:13:07.647331   72192 out.go:177] * Verifying Kubernetes components...
	I0421 20:13:07.645814   72192 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:13:07.645992   72192 config.go:182] Loaded profile config "flannel-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:13:07.648695   72192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:07.647400   72192 addons.go:69] Setting storage-provisioner=true in profile "flannel-474762"
	I0421 20:13:07.648814   72192 addons.go:234] Setting addon storage-provisioner=true in "flannel-474762"
	I0421 20:13:07.648908   72192 host.go:66] Checking if "flannel-474762" exists ...
	I0421 20:13:07.647412   72192 addons.go:69] Setting default-storageclass=true in profile "flannel-474762"
	I0421 20:13:07.649205   72192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-474762"
	I0421 20:13:07.649410   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.649462   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.649655   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.649695   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.667623   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0421 20:13:07.668032   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.668559   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.668585   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.668916   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.669104   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:13:07.671367   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I0421 20:13:07.671780   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.672276   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.672306   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.672679   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.673140   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.673173   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.673492   72192 addons.go:234] Setting addon default-storageclass=true in "flannel-474762"
	I0421 20:13:07.673528   72192 host.go:66] Checking if "flannel-474762" exists ...
	I0421 20:13:07.673872   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.673916   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.690241   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0421 20:13:07.690706   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.691226   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.691250   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.691645   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.691889   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:13:07.693529   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:13:07.695485   72192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:13:07.697027   72192 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:07.697045   72192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:13:07.697062   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:13:07.699703   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.700104   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:13:07.700129   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.700257   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:13:07.700441   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:13:07.700604   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:13:07.700741   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:13:07.702488   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0421 20:13:07.702948   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.703401   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.703417   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.703740   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.704256   72192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:07.704301   72192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:07.720661   72192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0421 20:13:07.721240   72192 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:07.721797   72192 main.go:141] libmachine: Using API Version  1
	I0421 20:13:07.721822   72192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:07.722189   72192 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:07.722388   72192 main.go:141] libmachine: (flannel-474762) Calling .GetState
	I0421 20:13:07.724131   72192 main.go:141] libmachine: (flannel-474762) Calling .DriverName
	I0421 20:13:07.724413   72192 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:07.724432   72192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:13:07.724450   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHHostname
	I0421 20:13:07.727442   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.727948   72192 main.go:141] libmachine: (flannel-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:f0:3c", ip: ""} in network mk-flannel-474762: {Iface:virbr3 ExpiryTime:2024-04-21 21:12:21 +0000 UTC Type:0 Mac:52:54:00:e5:f0:3c Iaid: IPaddr:192.168.61.193 Prefix:24 Hostname:flannel-474762 Clientid:01:52:54:00:e5:f0:3c}
	I0421 20:13:07.727970   72192 main.go:141] libmachine: (flannel-474762) DBG | domain flannel-474762 has defined IP address 192.168.61.193 and MAC address 52:54:00:e5:f0:3c in network mk-flannel-474762
	I0421 20:13:07.728008   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHPort
	I0421 20:13:07.728599   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHKeyPath
	I0421 20:13:07.728814   72192 main.go:141] libmachine: (flannel-474762) Calling .GetSSHUsername
	I0421 20:13:07.729002   72192 sshutil.go:53] new ssh client: &{IP:192.168.61.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/flannel-474762/id_rsa Username:docker}
	I0421 20:13:07.937567   72192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:13:07.937740   72192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:13:07.979152   72192 node_ready.go:35] waiting up to 15m0s for node "flannel-474762" to be "Ready" ...
	I0421 20:13:08.095582   72192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:08.209889   72192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:08.554346   72192 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0421 20:13:08.554443   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:08.554467   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:08.554830   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:08.554882   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:08.554902   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:08.554919   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:08.554891   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:08.555239   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:08.555287   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:08.555305   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:08.570091   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:08.570120   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:08.570825   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:08.570877   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:08.570892   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:09.002754   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:09.002777   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:09.003055   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:09.003084   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:09.003095   72192 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:09.003107   72192 main.go:141] libmachine: (flannel-474762) Calling .Close
	I0421 20:13:09.003505   72192 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:09.003561   72192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:09.006609   72192 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 20:13:09.003507   72192 main.go:141] libmachine: (flannel-474762) DBG | Closing plugin on server side
	I0421 20:13:06.858589   73732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.05947899s)
	I0421 20:13:06.858618   73732 crio.go:469] duration metric: took 3.059597563s to extract the tarball
	I0421 20:13:06.858629   73732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 20:13:06.899937   73732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0421 20:13:06.960119   73732 crio.go:514] all images are preloaded for cri-o runtime.
	I0421 20:13:06.960139   73732 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:13:06.960146   73732 kubeadm.go:928] updating node { 192.168.50.35 8443 v1.30.0 crio true true} ...
	I0421 20:13:06.960264   73732 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-474762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0421 20:13:06.960363   73732 ssh_runner.go:195] Run: crio config
	I0421 20:13:07.017595   73732 cni.go:84] Creating CNI manager for "bridge"
	I0421 20:13:07.017626   73732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:13:07.017649   73732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.35 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-474762 NodeName:bridge-474762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:13:07.017797   73732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-474762"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:13:07.017852   73732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:13:07.029889   73732 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:13:07.029962   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:13:07.040628   73732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0421 20:13:07.063906   73732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:13:07.082951   73732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0421 20:13:07.102288   73732 ssh_runner.go:195] Run: grep 192.168.50.35	control-plane.minikube.internal$ /etc/hosts
	I0421 20:13:07.106666   73732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:13:07.120702   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:07.265981   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:13:07.285107   73732 certs.go:68] Setting up /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762 for IP: 192.168.50.35
	I0421 20:13:07.285130   73732 certs.go:194] generating shared ca certs ...
	I0421 20:13:07.285149   73732 certs.go:226] acquiring lock for ca certs: {Name:mkaca3dfdda4c6795a8c3f402fdee85570873fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.285368   73732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key
	I0421 20:13:07.285427   73732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key
	I0421 20:13:07.285448   73732 certs.go:256] generating profile certs ...
	I0421 20:13:07.285517   73732 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.key
	I0421 20:13:07.285536   73732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt with IP's: []
	I0421 20:13:07.605681   73732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt ...
	I0421 20:13:07.605719   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: {Name:mk38bef37a27f99facbe20e2098d106558015f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.605932   73732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.key ...
	I0421 20:13:07.605949   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.key: {Name:mk7af3c804a2486eec74e2c8abd8813e7941b34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.606079   73732 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5
	I0421 20:13:07.606101   73732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.35]
	I0421 20:13:07.764263   73732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5 ...
	I0421 20:13:07.764291   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5: {Name:mk134af05868bf23ad3534ea8aaefa1f3c91ed55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.764436   73732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5 ...
	I0421 20:13:07.764454   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5: {Name:mkb6ed907eb4b4b4bcb788b6ee72b93cf7939671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.764566   73732 certs.go:381] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt.8a16aef5 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt
	I0421 20:13:07.764678   73732 certs.go:385] copying /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key.8a16aef5 -> /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key
	I0421 20:13:07.764735   73732 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key
	I0421 20:13:07.764750   73732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt with IP's: []
	I0421 20:13:07.966757   73732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt ...
	I0421 20:13:07.966784   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt: {Name:mk930878172e737a3210d35d0129c249edfa25c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.966970   73732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key ...
	I0421 20:13:07.966989   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key: {Name:mk1d320a394635b7646a07c0714737b624ac242f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:07.967231   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem (1338 bytes)
	W0421 20:13:07.967273   73732 certs.go:480] ignoring /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175_empty.pem, impossibly tiny 0 bytes
	I0421 20:13:07.967325   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca-key.pem (1675 bytes)
	I0421 20:13:07.967359   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/ca.pem (1078 bytes)
	I0421 20:13:07.967390   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/cert.pem (1123 bytes)
	I0421 20:13:07.967420   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/certs/key.pem (1679 bytes)
	I0421 20:13:07.967476   73732 certs.go:484] found cert: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem (1708 bytes)
	I0421 20:13:07.968251   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:13:08.003296   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:13:08.032115   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:13:08.059930   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:13:08.094029   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 20:13:08.126679   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 20:13:08.157460   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:13:08.193094   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 20:13:08.228277   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/certs/11175.pem --> /usr/share/ca-certificates/11175.pem (1338 bytes)
	I0421 20:13:08.267056   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/ssl/certs/111752.pem --> /usr/share/ca-certificates/111752.pem (1708 bytes)
	I0421 20:13:08.301616   73732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:13:08.336511   73732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:13:08.360262   73732 ssh_runner.go:195] Run: openssl version
	I0421 20:13:08.367585   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11175.pem && ln -fs /usr/share/ca-certificates/11175.pem /etc/ssl/certs/11175.pem"
	I0421 20:13:08.381306   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11175.pem
	I0421 20:13:08.386781   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:35 /usr/share/ca-certificates/11175.pem
	I0421 20:13:08.386846   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11175.pem
	I0421 20:13:08.393759   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11175.pem /etc/ssl/certs/51391683.0"
	I0421 20:13:08.406828   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111752.pem && ln -fs /usr/share/ca-certificates/111752.pem /etc/ssl/certs/111752.pem"
	I0421 20:13:08.419328   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111752.pem
	I0421 20:13:08.425353   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:35 /usr/share/ca-certificates/111752.pem
	I0421 20:13:08.425419   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111752.pem
	I0421 20:13:08.432115   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111752.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:13:08.445176   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:13:08.459186   73732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:13:08.464852   73732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:23 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:13:08.464965   73732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:13:08.472060   73732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:13:08.484653   73732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:13:08.489871   73732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:13:08.489938   73732 kubeadm.go:391] StartCluster: {Name:bridge-474762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-474762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:13:08.490032   73732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0421 20:13:08.490115   73732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0421 20:13:08.542184   73732 cri.go:89] found id: ""
	I0421 20:13:08.542274   73732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:13:08.557818   73732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:13:08.570702   73732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:13:08.583601   73732 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:13:08.583623   73732 kubeadm.go:156] found existing configuration files:
	
	I0421 20:13:08.583668   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:13:08.595186   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:13:08.595265   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:13:08.608307   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:13:08.623730   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:13:08.623812   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:13:08.637568   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:13:08.649870   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:13:08.649935   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:13:08.664876   73732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:13:08.681703   73732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:13:08.681766   73732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:13:08.712011   73732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:13:08.784150   73732 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:13:08.784240   73732 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:13:08.937564   73732 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:13:08.937707   73732 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:13:08.937833   73732 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:13:09.243153   73732 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:13:09.245263   73732 out.go:204]   - Generating certificates and keys ...
	I0421 20:13:09.245398   73732 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:13:09.245529   73732 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:13:09.471208   73732 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:13:09.591901   73732 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:13:09.768935   73732 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 20:13:09.957888   73732 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 20:13:10.078525   73732 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 20:13:10.078684   73732 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [bridge-474762 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	I0421 20:13:10.240646   73732 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 20:13:10.240834   73732 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [bridge-474762 localhost] and IPs [192.168.50.35 127.0.0.1 ::1]
	I0421 20:13:10.458251   73732 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:13:10.795103   73732 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:13:10.986823   73732 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 20:13:10.986910   73732 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:13:11.127092   73732 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:13:11.439115   73732 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:13:11.532698   73732 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:13:11.700537   73732 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:13:11.963479   73732 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:13:11.966199   73732 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:13:11.974389   73732 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:13:07.152885   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:09.652064   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:09.008179   72192 addons.go:505] duration metric: took 1.362368384s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0421 20:13:09.059435   72192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-474762" context rescaled to 1 replicas
	I0421 20:13:09.983449   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:12.485090   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:12.152339   70482 pod_ready.go:102] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:13.153753   70482 pod_ready.go:92] pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.153783   70482 pod_ready.go:81] duration metric: took 28.509799697s for pod "coredns-7db6d8ff4d-xn48s" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.153797   70482 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.161854   70482 pod_ready.go:92] pod "etcd-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.161877   70482 pod_ready.go:81] duration metric: took 8.071208ms for pod "etcd-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.161892   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.168354   70482 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.168377   70482 pod_ready.go:81] duration metric: took 6.476734ms for pod "kube-apiserver-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.168390   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.173246   70482 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.173271   70482 pod_ready.go:81] duration metric: took 4.871919ms for pod "kube-controller-manager-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.173282   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-wgg4k" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.177972   70482 pod_ready.go:92] pod "kube-proxy-wgg4k" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.177997   70482 pod_ready.go:81] duration metric: took 4.706452ms for pod "kube-proxy-wgg4k" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.178009   70482 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.549496   70482 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:13.549528   70482 pod_ready.go:81] duration metric: took 371.510124ms for pod "kube-scheduler-enable-default-cni-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:13.549539   70482 pod_ready.go:38] duration metric: took 40.943019237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:13.549556   70482 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:13:13.549615   70482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:13:13.572571   70482 api_server.go:72] duration metric: took 41.367404134s to wait for apiserver process to appear ...
	I0421 20:13:13.572610   70482 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:13:13.572641   70482 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0421 20:13:13.577758   70482 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0421 20:13:13.579081   70482 api_server.go:141] control plane version: v1.30.0
	I0421 20:13:13.579104   70482 api_server.go:131] duration metric: took 6.485234ms to wait for apiserver health ...
	I0421 20:13:13.579114   70482 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:13:13.751726   70482 system_pods.go:59] 7 kube-system pods found
	I0421 20:13:13.751765   70482 system_pods.go:61] "coredns-7db6d8ff4d-xn48s" [0de9c7fe-f4ff-4fa7-975f-e5d997794cc0] Running
	I0421 20:13:13.751772   70482 system_pods.go:61] "etcd-enable-default-cni-474762" [94751a3f-7155-4898-a58a-dec8f3dbfeb9] Running
	I0421 20:13:13.751776   70482 system_pods.go:61] "kube-apiserver-enable-default-cni-474762" [9123f173-e342-4d62-a0a7-5c1af286a9e3] Running
	I0421 20:13:13.751780   70482 system_pods.go:61] "kube-controller-manager-enable-default-cni-474762" [6194a232-ff72-48a5-a5ed-30f318f551b1] Running
	I0421 20:13:13.751783   70482 system_pods.go:61] "kube-proxy-wgg4k" [f625ecf0-3d23-433a-9a09-ab316cafb2f0] Running
	I0421 20:13:13.751786   70482 system_pods.go:61] "kube-scheduler-enable-default-cni-474762" [e5eed32b-7fb6-485c-ae85-023720b92a69] Running
	I0421 20:13:13.751789   70482 system_pods.go:61] "storage-provisioner" [8cd24301-2b24-4237-8e9d-475a64634f41] Running
	I0421 20:13:13.751796   70482 system_pods.go:74] duration metric: took 172.674755ms to wait for pod list to return data ...
	I0421 20:13:13.751803   70482 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:13:13.948189   70482 default_sa.go:45] found service account: "default"
	I0421 20:13:13.948224   70482 default_sa.go:55] duration metric: took 196.415194ms for default service account to be created ...
	I0421 20:13:13.948233   70482 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:13:14.152936   70482 system_pods.go:86] 7 kube-system pods found
	I0421 20:13:14.152964   70482 system_pods.go:89] "coredns-7db6d8ff4d-xn48s" [0de9c7fe-f4ff-4fa7-975f-e5d997794cc0] Running
	I0421 20:13:14.152970   70482 system_pods.go:89] "etcd-enable-default-cni-474762" [94751a3f-7155-4898-a58a-dec8f3dbfeb9] Running
	I0421 20:13:14.152975   70482 system_pods.go:89] "kube-apiserver-enable-default-cni-474762" [9123f173-e342-4d62-a0a7-5c1af286a9e3] Running
	I0421 20:13:14.152979   70482 system_pods.go:89] "kube-controller-manager-enable-default-cni-474762" [6194a232-ff72-48a5-a5ed-30f318f551b1] Running
	I0421 20:13:14.152983   70482 system_pods.go:89] "kube-proxy-wgg4k" [f625ecf0-3d23-433a-9a09-ab316cafb2f0] Running
	I0421 20:13:14.152987   70482 system_pods.go:89] "kube-scheduler-enable-default-cni-474762" [e5eed32b-7fb6-485c-ae85-023720b92a69] Running
	I0421 20:13:14.152991   70482 system_pods.go:89] "storage-provisioner" [8cd24301-2b24-4237-8e9d-475a64634f41] Running
	I0421 20:13:14.152996   70482 system_pods.go:126] duration metric: took 204.758306ms to wait for k8s-apps to be running ...
	I0421 20:13:14.153004   70482 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:13:14.153043   70482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:13:14.172989   70482 system_svc.go:56] duration metric: took 19.974815ms WaitForService to wait for kubelet
	I0421 20:13:14.173028   70482 kubeadm.go:576] duration metric: took 41.967867255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:13:14.173054   70482 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:13:14.350556   70482 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:13:14.350587   70482 node_conditions.go:123] node cpu capacity is 2
	I0421 20:13:14.350601   70482 node_conditions.go:105] duration metric: took 177.541558ms to run NodePressure ...
	I0421 20:13:14.350616   70482 start.go:240] waiting for startup goroutines ...
	I0421 20:13:14.350626   70482 start.go:245] waiting for cluster config update ...
	I0421 20:13:14.350639   70482 start.go:254] writing updated cluster config ...
	I0421 20:13:14.350986   70482 ssh_runner.go:195] Run: rm -f paused
	I0421 20:13:14.416852   70482 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:13:14.418906   70482 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-474762" cluster and "default" namespace by default
	I0421 20:13:11.975842   73732 out.go:204]   - Booting up control plane ...
	I0421 20:13:11.975958   73732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:13:11.976062   73732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:13:11.976624   73732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:13:12.005975   73732 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:13:12.006154   73732 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:13:12.006210   73732 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:13:12.164312   73732 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:13:12.164415   73732 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:13:12.665474   73732 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.663722ms
	I0421 20:13:12.665586   73732 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:13:14.486548   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:16.983558   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:18.166872   73732 kubeadm.go:309] [api-check] The API server is healthy after 5.502240103s
	I0421 20:13:18.194686   73732 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:13:18.218931   73732 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:13:18.306951   73732 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:13:18.307196   73732 kubeadm.go:309] [mark-control-plane] Marking the node bridge-474762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:13:18.335811   73732 kubeadm.go:309] [bootstrap-token] Using token: jlj9t3.y9mg1ccu6iugp1il
	I0421 20:13:18.338390   73732 out.go:204]   - Configuring RBAC rules ...
	I0421 20:13:18.338527   73732 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:13:18.351387   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:13:18.384358   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:13:18.400734   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:13:18.420787   73732 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:13:18.430500   73732 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:13:18.578363   73732 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:13:19.026211   73732 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:13:19.840397   73732 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:13:19.841701   73732 kubeadm.go:309] 
	I0421 20:13:19.841815   73732 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:13:19.841842   73732 kubeadm.go:309] 
	I0421 20:13:19.841952   73732 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:13:19.841963   73732 kubeadm.go:309] 
	I0421 20:13:19.842004   73732 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:13:19.842117   73732 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:13:19.842202   73732 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:13:19.842213   73732 kubeadm.go:309] 
	I0421 20:13:19.842286   73732 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:13:19.842297   73732 kubeadm.go:309] 
	I0421 20:13:19.842352   73732 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:13:19.842362   73732 kubeadm.go:309] 
	I0421 20:13:19.842430   73732 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:13:19.842512   73732 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:13:19.842596   73732 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:13:19.842607   73732 kubeadm.go:309] 
	I0421 20:13:19.842730   73732 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:13:19.842850   73732 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:13:19.842858   73732 kubeadm.go:309] 
	I0421 20:13:19.842976   73732 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jlj9t3.y9mg1ccu6iugp1il \
	I0421 20:13:19.843160   73732 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 \
	I0421 20:13:19.843200   73732 kubeadm.go:309] 	--control-plane 
	I0421 20:13:19.843218   73732 kubeadm.go:309] 
	I0421 20:13:19.843348   73732 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:13:19.843370   73732 kubeadm.go:309] 
	I0421 20:13:19.843513   73732 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jlj9t3.y9mg1ccu6iugp1il \
	I0421 20:13:19.843662   73732 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6ce79058ecd745bf170e5a070d500fd7071ba55e6785e2f2a94d55da544bd38 
	I0421 20:13:19.843925   73732 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:13:19.843967   73732 cni.go:84] Creating CNI manager for "bridge"
	I0421 20:13:19.853961   73732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 20:13:19.855837   73732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 20:13:19.871124   73732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 20:13:19.899860   73732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:13:19.900002   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:19.900095   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-474762 minikube.k8s.io/updated_at=2024_04_21T20_13_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=bridge-474762 minikube.k8s.io/primary=true
	I0421 20:13:20.112485   73732 ops.go:34] apiserver oom_adj: -16
	I0421 20:13:20.112601   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:20.612947   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:21.113482   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:18.983729   72192 node_ready.go:53] node "flannel-474762" has status "Ready":"False"
	I0421 20:13:20.483066   72192 node_ready.go:49] node "flannel-474762" has status "Ready":"True"
	I0421 20:13:20.483091   72192 node_ready.go:38] duration metric: took 12.503897106s for node "flannel-474762" to be "Ready" ...
	I0421 20:13:20.483103   72192 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:20.490733   72192 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:22.497745   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:21.612638   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:22.113624   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:22.613407   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:23.113479   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:23.613270   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:24.113231   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:24.612659   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:25.113113   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:25.613596   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:26.113330   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:24.498086   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:26.997873   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:26.613367   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:27.112631   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:27.612934   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:28.113238   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:28.613334   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:29.113110   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:29.613387   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:30.113077   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:30.613151   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:31.112853   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:31.613369   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:32.113262   73732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:13:32.270897   73732 kubeadm.go:1107] duration metric: took 12.370941451s to wait for elevateKubeSystemPrivileges
	W0421 20:13:32.270939   73732 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:13:32.270948   73732 kubeadm.go:393] duration metric: took 23.781015701s to StartCluster
	I0421 20:13:32.270970   73732 settings.go:142] acquiring lock: {Name:mk8f62ee3af6b6ee06d5d98fdb685e1be7694fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:32.271042   73732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 20:13:32.273002   73732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/kubeconfig: {Name:mkc7241d165900c8a9d26e9aa1382a3d41519db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:13:32.273221   73732 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.50.35 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0421 20:13:32.275028   73732 out.go:177] * Verifying Kubernetes components...
	I0421 20:13:32.273320   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:13:32.273343   73732 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:13:32.273505   73732 config.go:182] Loaded profile config "bridge-474762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 20:13:32.276834   73732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:13:32.276978   73732 addons.go:69] Setting storage-provisioner=true in profile "bridge-474762"
	I0421 20:13:32.277005   73732 addons.go:234] Setting addon storage-provisioner=true in "bridge-474762"
	I0421 20:13:32.277032   73732 host.go:66] Checking if "bridge-474762" exists ...
	I0421 20:13:32.277391   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.277408   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.277474   73732 addons.go:69] Setting default-storageclass=true in profile "bridge-474762"
	I0421 20:13:32.277503   73732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-474762"
	I0421 20:13:32.277873   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.277896   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.294466   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0421 20:13:32.294689   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0421 20:13:32.294967   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.295054   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.295476   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.295490   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.295836   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.296440   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.296464   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.296701   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.296718   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.298802   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.299003   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:13:32.303342   73732 addons.go:234] Setting addon default-storageclass=true in "bridge-474762"
	I0421 20:13:32.303383   73732 host.go:66] Checking if "bridge-474762" exists ...
	I0421 20:13:32.303733   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.303762   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.314810   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0421 20:13:32.315273   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.315733   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.315749   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.316076   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.316266   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:13:32.317782   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:32.319768   73732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:13:29.498089   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:31.998654   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:32.321159   73732 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:32.321177   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:13:32.321194   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:32.323894   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.324604   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0421 20:13:32.324967   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.325345   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:32.325364   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.325514   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:32.325650   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:32.326030   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.326048   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.326223   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:32.326339   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:32.326566   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.327005   73732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 20:13:32.327036   73732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 20:13:32.346651   73732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0421 20:13:32.347069   73732 main.go:141] libmachine: () Calling .GetVersion
	I0421 20:13:32.347564   73732 main.go:141] libmachine: Using API Version  1
	I0421 20:13:32.347582   73732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 20:13:32.347960   73732 main.go:141] libmachine: () Calling .GetMachineName
	I0421 20:13:32.348175   73732 main.go:141] libmachine: (bridge-474762) Calling .GetState
	I0421 20:13:32.353346   73732 main.go:141] libmachine: (bridge-474762) Calling .DriverName
	I0421 20:13:32.353664   73732 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:32.353681   73732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:13:32.353700   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHHostname
	I0421 20:13:32.356322   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.356675   73732 main.go:141] libmachine: (bridge-474762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ee:7b", ip: ""} in network mk-bridge-474762: {Iface:virbr2 ExpiryTime:2024-04-21 21:12:51 +0000 UTC Type:0 Mac:52:54:00:46:ee:7b Iaid: IPaddr:192.168.50.35 Prefix:24 Hostname:bridge-474762 Clientid:01:52:54:00:46:ee:7b}
	I0421 20:13:32.356697   73732 main.go:141] libmachine: (bridge-474762) DBG | domain bridge-474762 has defined IP address 192.168.50.35 and MAC address 52:54:00:46:ee:7b in network mk-bridge-474762
	I0421 20:13:32.356814   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHPort
	I0421 20:13:32.356974   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHKeyPath
	I0421 20:13:32.357099   73732 main.go:141] libmachine: (bridge-474762) Calling .GetSSHUsername
	I0421 20:13:32.357210   73732 sshutil.go:53] new ssh client: &{IP:192.168.50.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/bridge-474762/id_rsa Username:docker}
	I0421 20:13:32.592039   73732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:13:32.673336   73732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:13:32.674714   73732 node_ready.go:35] waiting up to 15m0s for node "bridge-474762" to be "Ready" ...
	I0421 20:13:32.738875   73732 node_ready.go:49] node "bridge-474762" has status "Ready":"True"
	I0421 20:13:32.738908   73732 node_ready.go:38] duration metric: took 64.170466ms for node "bridge-474762" to be "Ready" ...
	I0421 20:13:32.738920   73732 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:32.784377   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:13:32.814553   73732 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:32.844522   73732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:13:33.586943   73732 start.go:946] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0421 20:13:33.587016   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:33.587045   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:33.587313   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:33.587335   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:33.587340   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:33.587363   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:33.587371   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:33.587752   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:33.587805   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:33.587816   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:33.598277   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:33.598298   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:33.598682   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:33.598684   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:33.598703   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:34.098597   73732 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-474762" context rescaled to 1 replicas
	I0421 20:13:34.252141   73732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.407581868s)
	I0421 20:13:34.252194   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:34.252208   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:34.252588   73732 main.go:141] libmachine: (bridge-474762) DBG | Closing plugin on server side
	I0421 20:13:34.252621   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:34.252637   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:34.252651   73732 main.go:141] libmachine: Making call to close driver server
	I0421 20:13:34.252660   73732 main.go:141] libmachine: (bridge-474762) Calling .Close
	I0421 20:13:34.252905   73732 main.go:141] libmachine: Successfully made call to close driver server
	I0421 20:13:34.252918   73732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0421 20:13:34.254729   73732 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0421 20:13:34.256291   73732 addons.go:505] duration metric: took 1.982948284s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0421 20:13:34.826718   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:34.007036   72192 pod_ready.go:102] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:35.499399   72192 pod_ready.go:92] pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.499421   72192 pod_ready.go:81] duration metric: took 15.008658343s for pod "coredns-7db6d8ff4d-2sh9b" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.499430   72192 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.505075   72192 pod_ready.go:92] pod "etcd-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.505098   72192 pod_ready.go:81] duration metric: took 5.659703ms for pod "etcd-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.505110   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.509795   72192 pod_ready.go:92] pod "kube-apiserver-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.509814   72192 pod_ready.go:81] duration metric: took 4.694619ms for pod "kube-apiserver-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.509825   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.514945   72192 pod_ready.go:92] pod "kube-controller-manager-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.514966   72192 pod_ready.go:81] duration metric: took 5.132029ms for pod "kube-controller-manager-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.514979   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-4gmfm" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.519804   72192 pod_ready.go:92] pod "kube-proxy-4gmfm" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.519834   72192 pod_ready.go:81] duration metric: took 4.846952ms for pod "kube-proxy-4gmfm" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.519853   72192 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.896620   72192 pod_ready.go:92] pod "kube-scheduler-flannel-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:13:35.896650   72192 pod_ready.go:81] duration metric: took 376.789363ms for pod "kube-scheduler-flannel-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:35.896661   72192 pod_ready.go:38] duration metric: took 15.413547538s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:13:35.896675   72192 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:13:35.896726   72192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:13:35.917584   72192 api_server.go:72] duration metric: took 28.271775974s to wait for apiserver process to appear ...
	I0421 20:13:35.917611   72192 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:13:35.917632   72192 api_server.go:253] Checking apiserver healthz at https://192.168.61.193:8443/healthz ...
	I0421 20:13:35.923813   72192 api_server.go:279] https://192.168.61.193:8443/healthz returned 200:
	ok
	I0421 20:13:35.925274   72192 api_server.go:141] control plane version: v1.30.0
	I0421 20:13:35.925293   72192 api_server.go:131] duration metric: took 7.674656ms to wait for apiserver health ...
	I0421 20:13:35.925303   72192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:13:36.100733   72192 system_pods.go:59] 7 kube-system pods found
	I0421 20:13:36.100768   72192 system_pods.go:61] "coredns-7db6d8ff4d-2sh9b" [1f8f4071-8007-4f4b-8b9a-8b24f1548b3c] Running
	I0421 20:13:36.100776   72192 system_pods.go:61] "etcd-flannel-474762" [81d3d998-92c8-42b3-8c04-996a538e51ad] Running
	I0421 20:13:36.100781   72192 system_pods.go:61] "kube-apiserver-flannel-474762" [304ec604-34e2-4acf-9731-c02e79ed97af] Running
	I0421 20:13:36.100786   72192 system_pods.go:61] "kube-controller-manager-flannel-474762" [b8cd61c8-b9c3-4a1b-98da-becd00c4d3fe] Running
	I0421 20:13:36.100791   72192 system_pods.go:61] "kube-proxy-4gmfm" [b98d303b-12ea-4d1d-9c9c-768eedc98a02] Running
	I0421 20:13:36.100796   72192 system_pods.go:61] "kube-scheduler-flannel-474762" [798b1fa8-b941-4aaf-a0b2-d633bed69ee4] Running
	I0421 20:13:36.100800   72192 system_pods.go:61] "storage-provisioner" [2ae2cee4-1115-4004-8033-7c296c63d587] Running
	I0421 20:13:36.100809   72192 system_pods.go:74] duration metric: took 175.498295ms to wait for pod list to return data ...
	I0421 20:13:36.100818   72192 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:13:36.294758   72192 default_sa.go:45] found service account: "default"
	I0421 20:13:36.294786   72192 default_sa.go:55] duration metric: took 193.954425ms for default service account to be created ...
	I0421 20:13:36.294797   72192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:13:36.499318   72192 system_pods.go:86] 7 kube-system pods found
	I0421 20:13:36.499340   72192 system_pods.go:89] "coredns-7db6d8ff4d-2sh9b" [1f8f4071-8007-4f4b-8b9a-8b24f1548b3c] Running
	I0421 20:13:36.499346   72192 system_pods.go:89] "etcd-flannel-474762" [81d3d998-92c8-42b3-8c04-996a538e51ad] Running
	I0421 20:13:36.499350   72192 system_pods.go:89] "kube-apiserver-flannel-474762" [304ec604-34e2-4acf-9731-c02e79ed97af] Running
	I0421 20:13:36.499355   72192 system_pods.go:89] "kube-controller-manager-flannel-474762" [b8cd61c8-b9c3-4a1b-98da-becd00c4d3fe] Running
	I0421 20:13:36.499368   72192 system_pods.go:89] "kube-proxy-4gmfm" [b98d303b-12ea-4d1d-9c9c-768eedc98a02] Running
	I0421 20:13:36.499372   72192 system_pods.go:89] "kube-scheduler-flannel-474762" [798b1fa8-b941-4aaf-a0b2-d633bed69ee4] Running
	I0421 20:13:36.499376   72192 system_pods.go:89] "storage-provisioner" [2ae2cee4-1115-4004-8033-7c296c63d587] Running
	I0421 20:13:36.499382   72192 system_pods.go:126] duration metric: took 204.579054ms to wait for k8s-apps to be running ...
	I0421 20:13:36.499394   72192 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:13:36.499432   72192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:13:36.521795   72192 system_svc.go:56] duration metric: took 22.393237ms WaitForService to wait for kubelet
	I0421 20:13:36.521823   72192 kubeadm.go:576] duration metric: took 28.876017111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:13:36.521855   72192 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:13:36.695108   72192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:13:36.695135   72192 node_conditions.go:123] node cpu capacity is 2
	I0421 20:13:36.695154   72192 node_conditions.go:105] duration metric: took 173.293814ms to run NodePressure ...
	I0421 20:13:36.695167   72192 start.go:240] waiting for startup goroutines ...
	I0421 20:13:36.695176   72192 start.go:245] waiting for cluster config update ...
	I0421 20:13:36.695188   72192 start.go:254] writing updated cluster config ...
	I0421 20:13:36.695399   72192 ssh_runner.go:195] Run: rm -f paused
	I0421 20:13:36.760642   72192 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:13:36.763630   72192 out.go:177] * Done! kubectl is now configured to use "flannel-474762" cluster and "default" namespace by default
	I0421 20:13:37.330538   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:39.822415   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:41.824271   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:44.322460   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:44.830308   73732 pod_ready.go:97] pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.35 HostIPs:[{IP:192.168.50.35}] PodIP: PodIPs:[] StartTime:2024-04-21 20:13:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:13:34 +0000 UTC,FinishedAt:2024-04-21 20:13:44 +0000 UTC,ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d Started:0xc002a76400 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:13:44.830348   73732 pod_ready.go:81] duration metric: took 12.01576578s for pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace to be "Ready" ...
	E0421 20:13:44.830363   73732 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-f8b6h" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-21 20:13:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.35 HostIPs:[{IP:192.168.50.35}] PodIP: PodIPs:[] StartTime:2024-04-21 20:13:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-21 20:13:34 +0000 UTC,FinishedAt:2024-04-21 20:13:44 +0000 UTC,ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9a2a92ef4290267871eed4e395be8df53b70ce38391fdce8d80eed0b1e2b391d Started:0xc002a76400 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0421 20:13:44.830375   73732 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace to be "Ready" ...
	I0421 20:13:46.838586   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:49.338750   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:51.837412   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:54.337103   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:56.338884   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:13:58.837198   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:00.838008   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:03.339443   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:05.339596   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:07.841981   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:10.337430   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:12.337457   73732 pod_ready.go:102] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:14:13.336935   73732 pod_ready.go:92] pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.336957   73732 pod_ready.go:81] duration metric: took 28.506572825s for pod "coredns-7db6d8ff4d-s2pv8" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.336966   73732 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.341378   73732 pod_ready.go:92] pod "etcd-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.341394   73732 pod_ready.go:81] duration metric: took 4.423034ms for pod "etcd-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.341402   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.346180   73732 pod_ready.go:92] pod "kube-apiserver-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.346203   73732 pod_ready.go:81] duration metric: took 4.795357ms for pod "kube-apiserver-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.346217   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.351294   73732 pod_ready.go:92] pod "kube-controller-manager-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.351314   73732 pod_ready.go:81] duration metric: took 5.086902ms for pod "kube-controller-manager-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.351323   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-7m4zl" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.355465   73732 pod_ready.go:92] pod "kube-proxy-7m4zl" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.355480   73732 pod_ready.go:81] duration metric: took 4.151092ms for pod "kube-proxy-7m4zl" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.355487   73732 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.734460   73732 pod_ready.go:92] pod "kube-scheduler-bridge-474762" in "kube-system" namespace has status "Ready":"True"
	I0421 20:14:13.734479   73732 pod_ready.go:81] duration metric: took 378.985254ms for pod "kube-scheduler-bridge-474762" in "kube-system" namespace to be "Ready" ...
	I0421 20:14:13.734490   73732 pod_ready.go:38] duration metric: took 40.995554584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:14:13.734502   73732 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:14:13.734546   73732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:14:13.751229   73732 api_server.go:72] duration metric: took 41.477977543s to wait for apiserver process to appear ...
	I0421 20:14:13.751246   73732 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:14:13.751261   73732 api_server.go:253] Checking apiserver healthz at https://192.168.50.35:8443/healthz ...
	I0421 20:14:13.755457   73732 api_server.go:279] https://192.168.50.35:8443/healthz returned 200:
	ok
	I0421 20:14:13.756384   73732 api_server.go:141] control plane version: v1.30.0
	I0421 20:14:13.756399   73732 api_server.go:131] duration metric: took 5.147985ms to wait for apiserver health ...
	I0421 20:14:13.756406   73732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:14:13.938964   73732 system_pods.go:59] 7 kube-system pods found
	I0421 20:14:13.939001   73732 system_pods.go:61] "coredns-7db6d8ff4d-s2pv8" [9cda56e7-d4f6-4810-959d-ecfba76f4bd1] Running
	I0421 20:14:13.939007   73732 system_pods.go:61] "etcd-bridge-474762" [181cb621-383f-4ede-b8a3-863219989782] Running
	I0421 20:14:13.939013   73732 system_pods.go:61] "kube-apiserver-bridge-474762" [1b718c38-3f70-484b-9444-75418197ac23] Running
	I0421 20:14:13.939018   73732 system_pods.go:61] "kube-controller-manager-bridge-474762" [92a8935f-63b0-46af-b84a-fee815747ad3] Running
	I0421 20:14:13.939023   73732 system_pods.go:61] "kube-proxy-7m4zl" [2d0cfcb1-bc45-4f18-a39c-008228494bf1] Running
	I0421 20:14:13.939027   73732 system_pods.go:61] "kube-scheduler-bridge-474762" [4bc919f5-fa83-4605-ab1a-c00a5fac7cb9] Running
	I0421 20:14:13.939032   73732 system_pods.go:61] "storage-provisioner" [c610bd1d-e889-464a-a081-c8b8379afe79] Running
	I0421 20:14:13.939039   73732 system_pods.go:74] duration metric: took 182.627504ms to wait for pod list to return data ...
	I0421 20:14:13.939049   73732 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:14:14.134340   73732 default_sa.go:45] found service account: "default"
	I0421 20:14:14.134374   73732 default_sa.go:55] duration metric: took 195.317449ms for default service account to be created ...
	I0421 20:14:14.134387   73732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:14:14.337437   73732 system_pods.go:86] 7 kube-system pods found
	I0421 20:14:14.337463   73732 system_pods.go:89] "coredns-7db6d8ff4d-s2pv8" [9cda56e7-d4f6-4810-959d-ecfba76f4bd1] Running
	I0421 20:14:14.337468   73732 system_pods.go:89] "etcd-bridge-474762" [181cb621-383f-4ede-b8a3-863219989782] Running
	I0421 20:14:14.337472   73732 system_pods.go:89] "kube-apiserver-bridge-474762" [1b718c38-3f70-484b-9444-75418197ac23] Running
	I0421 20:14:14.337476   73732 system_pods.go:89] "kube-controller-manager-bridge-474762" [92a8935f-63b0-46af-b84a-fee815747ad3] Running
	I0421 20:14:14.337480   73732 system_pods.go:89] "kube-proxy-7m4zl" [2d0cfcb1-bc45-4f18-a39c-008228494bf1] Running
	I0421 20:14:14.337483   73732 system_pods.go:89] "kube-scheduler-bridge-474762" [4bc919f5-fa83-4605-ab1a-c00a5fac7cb9] Running
	I0421 20:14:14.337487   73732 system_pods.go:89] "storage-provisioner" [c610bd1d-e889-464a-a081-c8b8379afe79] Running
	I0421 20:14:14.337493   73732 system_pods.go:126] duration metric: took 203.100247ms to wait for k8s-apps to be running ...
	I0421 20:14:14.337499   73732 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:14:14.337539   73732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:14:14.353672   73732 system_svc.go:56] duration metric: took 16.166964ms WaitForService to wait for kubelet
	I0421 20:14:14.353694   73732 kubeadm.go:576] duration metric: took 42.080447731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:14:14.353709   73732 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:14:14.534010   73732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:14:14.534039   73732 node_conditions.go:123] node cpu capacity is 2
	I0421 20:14:14.534053   73732 node_conditions.go:105] duration metric: took 180.338582ms to run NodePressure ...
	I0421 20:14:14.534077   73732 start.go:240] waiting for startup goroutines ...
	I0421 20:14:14.534090   73732 start.go:245] waiting for cluster config update ...
	I0421 20:14:14.534107   73732 start.go:254] writing updated cluster config ...
	I0421 20:14:14.534423   73732 ssh_runner.go:195] Run: rm -f paused
	I0421 20:14:14.586510   73732 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:14:14.588612   73732 out.go:177] * Done! kubectl is now configured to use "bridge-474762" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.707825447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730868707800309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a537e6cb-00c7-4bc9-a2a3-761140a87e0e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.709741360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fe26eca-f138-4a5c-8b60-6341bd8da554 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.709814959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fe26eca-f138-4a5c-8b60-6341bd8da554 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.710069800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fe26eca-f138-4a5c-8b60-6341bd8da554 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.750519201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07009e0a-8377-4a6a-a656-1e94a05678c7 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.750707854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07009e0a-8377-4a6a-a656-1e94a05678c7 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.752018994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9777808-4ab2-4c69-9854-acbe4b899ece name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.752404251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730868752384265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9777808-4ab2-4c69-9854-acbe4b899ece name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.753013872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69d9e570-84ea-4667-97d6-716d3ea3f0a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.753097827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69d9e570-84ea-4667-97d6-716d3ea3f0a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.753291084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69d9e570-84ea-4667-97d6-716d3ea3f0a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.798458414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee20da39-313c-412c-aca4-20ee364ccad8 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.798614123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee20da39-313c-412c-aca4-20ee364ccad8 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.800636792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4eb6bfc2-2785-42f5-8f52-7161a64a82d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.801039308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730868801017857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4eb6bfc2-2785-42f5-8f52-7161a64a82d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.801724346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23a47d77-ef3a-437c-87d4-7efc636f0c15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.801840973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23a47d77-ef3a-437c-87d4-7efc636f0c15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.802028771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23a47d77-ef3a-437c-87d4-7efc636f0c15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.845676533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dc0e8c1-a0a1-4600-8a2d-aa1863ca89b2 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.845770379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dc0e8c1-a0a1-4600-8a2d-aa1863ca89b2 name=/runtime.v1.RuntimeService/Version
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.847005357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a28c593d-ecd6-4600-a2d2-4b390b6cf4f6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.847371913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713730868847351501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a28c593d-ecd6-4600-a2d2-4b390b6cf4f6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.847878604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91e6cbaf-1c46-4190-b498-70a05f456a33 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.847935220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91e6cbaf-1c46-4190-b498-70a05f456a33 name=/runtime.v1.RuntimeService/ListContainers
	Apr 21 20:21:08 embed-certs-727235 crio[724]: time="2024-04-21 20:21:08.848123525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6,PodSandboxId:d1912fd0d8eb365d87c9cd957c3fb2c1b78516f214f3c13a0adf126a091cc7b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713729920239459072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63784fb4-2205-4b24-94c8-b11015c21ed6,},Annotations:map[string]string{io.kubernetes.container.hash: 519cda3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b,PodSandboxId:4de3ba4c06be5da0224d5df5a88aa5a711da73b20bd1b2ee5e22ac2767f419a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919215504668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mjgjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d879b9e-8ab5-4ae6-9677-024c7172f9aa,},Annotations:map[string]string{io.kubernetes.container.hash: dcf0221d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853,PodSandboxId:d1573010f4048f7aa8e792a3acbe0c56d4386049528eb5bbb8275499b5ce4498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713729919272510405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b7p8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6baeec2-c553-460c-b19a-62c20d04eb00,},Annotations:map[string]string{io.kubernetes.container.hash: a2569f85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb,PodSandboxId:9eb8f5bc71da14b618b1781dbcfe8ea08d4bd1e5f02fc7bab43dbac418a341a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt
:1713729918250903973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zh4fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b4342b3-19be-43ce-9a60-27dfab04af45,},Annotations:map[string]string{io.kubernetes.container.hash: 15fdc1eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726,PodSandboxId:a90c949abbcf015020eff288df341305c4112625203e55f22999dbd37e9a3323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713729898564529547,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54510377ccd2a60e96c74dff2da57a4b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306,PodSandboxId:1fe7743526570e86722bdb84e750a67626c55827f2553ba6056dbcede0d875b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713729898629107321,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e97af5e69925b5117d9fefbd5b833efe,},Annotations:map[string]string{io.kubernetes.container.hash: 77c5b022,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3,PodSandboxId:3450b6ecd6cbf591146d1c549618d1ed33e4d5e2153628d1a2f0a2ec46612fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713729898523439787,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99479ab1e31b7cfb6110bdeeecfce62b,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f,PodSandboxId:ef232caeea042573161a600a039ec3f40aaf5fa9cef5db74a85b8f42017d6a5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713729898521037050,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-727235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9255d8dfbf191ba1c48f16fd936c6f4d,},Annotations:map[string]string{io.kubernetes.container.hash: b16b24a5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91e6cbaf-1c46-4190-b498-70a05f456a33 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97ead3853c312       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d1912fd0d8eb3       storage-provisioner
	650fe46c897a4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   d1573010f4048       coredns-7db6d8ff4d-b7p8r
	410b67ad10f7c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   4de3ba4c06be5       coredns-7db6d8ff4d-mjgjp
	ae051d6fe30b2       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   15 minutes ago      Running             kube-proxy                0                   9eb8f5bc71da1       kube-proxy-zh4fs
	de24f31d2cd03       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   1fe7743526570       etcd-embed-certs-727235
	1d2911b2e722b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   16 minutes ago      Running             kube-scheduler            2                   a90c949abbcf0       kube-scheduler-embed-certs-727235
	7e5fa82e60b8f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   16 minutes ago      Running             kube-controller-manager   2                   3450b6ecd6cbf       kube-controller-manager-embed-certs-727235
	bc553514f919c       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   16 minutes ago      Running             kube-apiserver            2                   ef232caeea042       kube-apiserver-embed-certs-727235
	
	
	==> coredns [410b67ad10f7c6b1582c2150b3f8ee9084b1436e3f7ff8456888772f64dfd76b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [650fe46c897a489a9da72d971ee77aff3eeae836df4c4cdac3cb0dd806a55853] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-727235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-727235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=embed-certs-727235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T20_05_05_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-727235
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:21:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:20:44 +0000   Sun, 21 Apr 2024 20:04:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:20:44 +0000   Sun, 21 Apr 2024 20:04:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:20:44 +0000   Sun, 21 Apr 2024 20:04:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:20:44 +0000   Sun, 21 Apr 2024 20:05:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.9
	  Hostname:    embed-certs-727235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3397c26399140dfa6f25ac1a481f4c8
	  System UUID:                b3397c26-3991-40df-a6f2-5ac1a481f4c8
	  Boot ID:                    a6e1c195-555a-4656-b02f-464345d971da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-b7p8r                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-mjgjp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-727235                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-727235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-727235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-zh4fs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-727235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-2vwhn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-727235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-727235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-727235 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-727235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-727235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-727235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-727235 event: Registered Node embed-certs-727235 in Controller
	
	
	==> dmesg <==
	[  +0.044082] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.804092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.565884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.708468] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.720459] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.063024] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078029] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.204346] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.135931] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.322461] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[Apr21 20:00] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.064746] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.456659] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.628994] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.636557] kauditd_printk_skb: 79 callbacks suppressed
	[Apr21 20:04] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.840638] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[Apr21 20:05] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.115569] systemd-fstab-generator[3968]: Ignoring "noauto" option for root device
	[ +13.990862] systemd-fstab-generator[4172]: Ignoring "noauto" option for root device
	[  +0.090208] kauditd_printk_skb: 14 callbacks suppressed
	[Apr21 20:06] kauditd_printk_skb: 88 callbacks suppressed
	[Apr21 20:12] hrtimer: interrupt took 2576916 ns
	
	
	==> etcd [de24f31d2cd03908e4c1dc95d15f7c743e69acdfa7ba4635e94dcd6011a22306] <==
	{"level":"info","ts":"2024-04-21T20:10:23.982849Z","caller":"traceutil/trace.go:171","msg":"trace[970421004] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:730; }","duration":"326.541302ms","start":"2024-04-21T20:10:23.656295Z","end":"2024-04-21T20:10:23.982836Z","steps":["trace[970421004] 'agreement among raft nodes before linearized reading'  (duration: 326.18014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:23.983007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:23.656281Z","time spent":"326.710807ms","remote":"127.0.0.1:46036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":54,"response size":30,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"warn","ts":"2024-04-21T20:10:23.983076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.036359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:10:23.983133Z","caller":"traceutil/trace.go:171","msg":"trace[1700575669] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:730; }","duration":"348.118724ms","start":"2024-04-21T20:10:23.635003Z","end":"2024-04-21T20:10:23.983122Z","steps":["trace[1700575669] 'agreement among raft nodes before linearized reading'  (duration: 348.045477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:23.983164Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:23.634988Z","time spent":"348.169747ms","remote":"127.0.0.1:46160","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-04-21T20:10:59.001026Z","caller":"traceutil/trace.go:171","msg":"trace[1152693257] transaction","detail":"{read_only:false; response_revision:759; number_of_response:1; }","duration":"125.873603ms","start":"2024-04-21T20:10:58.875114Z","end":"2024-04-21T20:10:59.000988Z","steps":["trace[1152693257] 'process raft request'  (duration: 125.71521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:59.001484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.245638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:10:59.001632Z","caller":"traceutil/trace.go:171","msg":"trace[1388087734] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:759; }","duration":"117.376124ms","start":"2024-04-21T20:10:58.884174Z","end":"2024-04-21T20:10:59.00155Z","steps":["trace[1388087734] 'agreement among raft nodes before linearized reading'  (duration: 117.203095ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:10:59.001323Z","caller":"traceutil/trace.go:171","msg":"trace[747327944] linearizableReadLoop","detail":"{readStateIndex:841; appliedIndex:841; }","duration":"117.10792ms","start":"2024-04-21T20:10:58.884197Z","end":"2024-04-21T20:10:59.001305Z","steps":["trace[747327944] 'read index received'  (duration: 117.099776ms)","trace[747327944] 'applied index is now lower than readState.Index'  (duration: 7.014µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:10:59.390522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.552307ms","expected-duration":"100ms","prefix":"","request":"header:<ID:206196922829616332 > lease_revoke:<id:02dc8f024309b47f>","response":"size:28"}
	{"level":"info","ts":"2024-04-21T20:10:59.390669Z","caller":"traceutil/trace.go:171","msg":"trace[269270091] linearizableReadLoop","detail":"{readStateIndex:842; appliedIndex:841; }","duration":"387.708901ms","start":"2024-04-21T20:10:59.002946Z","end":"2024-04-21T20:10:59.390655Z","steps":["trace[269270091] 'read index received'  (duration: 66.934401ms)","trace[269270091] 'applied index is now lower than readState.Index'  (duration: 320.773334ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:10:59.390724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.764392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:10:59.390738Z","caller":"traceutil/trace.go:171","msg":"trace[1519781682] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:759; }","duration":"387.808333ms","start":"2024-04-21T20:10:59.002925Z","end":"2024-04-21T20:10:59.390733Z","steps":["trace[1519781682] 'agreement among raft nodes before linearized reading'  (duration: 387.763791ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:10:59.390789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:10:59.002912Z","time spent":"387.851178ms","remote":"127.0.0.1:45968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-21T20:12:42.136452Z","caller":"traceutil/trace.go:171","msg":"trace[2027134893] transaction","detail":"{read_only:false; response_revision:842; number_of_response:1; }","duration":"352.967868ms","start":"2024-04-21T20:12:41.783434Z","end":"2024-04-21T20:12:42.136402Z","steps":["trace[2027134893] 'process raft request'  (duration: 352.812507ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:12:42.1368Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:12:41.783417Z","time spent":"353.119724ms","remote":"127.0.0.1:46132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:841 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-21T20:12:42.139179Z","caller":"traceutil/trace.go:171","msg":"trace[418760929] linearizableReadLoop","detail":"{readStateIndex:945; appliedIndex:945; }","duration":"252.699186ms","start":"2024-04-21T20:12:41.884849Z","end":"2024-04-21T20:12:42.137548Z","steps":["trace[418760929] 'read index received'  (duration: 252.694133ms)","trace[418760929] 'applied index is now lower than readState.Index'  (duration: 4.085µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:12:42.13937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.515421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:12:42.13945Z","caller":"traceutil/trace.go:171","msg":"trace[2050415615] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:842; }","duration":"254.620928ms","start":"2024-04-21T20:12:41.884807Z","end":"2024-04-21T20:12:42.139428Z","steps":["trace[2050415615] 'agreement among raft nodes before linearized reading'  (duration: 254.519856ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:14:59.654207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-04-21T20:14:59.664886Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":710,"took":"10.061156ms","hash":630062677,"current-db-size-bytes":2170880,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2170880,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-21T20:14:59.66492Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":630062677,"revision":710,"compact-revision":-1}
	{"level":"info","ts":"2024-04-21T20:19:59.662992Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
	{"level":"info","ts":"2024-04-21T20:19:59.668246Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":953,"took":"4.098308ms","hash":2154402838,"current-db-size-bytes":2170880,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-04-21T20:19:59.668309Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2154402838,"revision":953,"compact-revision":710}
	
	
	==> kernel <==
	 20:21:09 up 21 min,  0 users,  load average: 0.15, 0.24, 0.22
	Linux embed-certs-727235 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bc553514f919cff61225014ba62433b57c1bc00d1b3ca5a7f6cf7a79569c489f] <==
	I0421 20:16:02.432118       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:18:02.431386       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:18:02.431522       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:18:02.431540       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:18:02.432823       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:18:02.432870       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:18:02.432879       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:20:01.435410       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:20:01.435617       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0421 20:20:02.435784       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:20:02.435845       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:20:02.435854       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:20:02.435993       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:20:02.436089       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:20:02.437394       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:21:02.436713       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:21:02.437451       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0421 20:21:02.437503       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0421 20:21:02.437727       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 20:21:02.437847       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0421 20:21:02.439196       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7e5fa82e60b8fd090fa5ef737991af5599885fe84d55d113f4869489753bb2a3] <==
	I0421 20:15:17.584797       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:15:47.078481       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:15:47.593334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:16:17.083759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:16:17.601837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0421 20:16:19.322750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="74.471µs"
	I0421 20:16:31.322746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="106.147µs"
	E0421 20:16:47.087754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:16:47.609553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:17:17.094243       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:17:17.620028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:17:47.099021       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:17:47.627156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:18:17.104468       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:18:17.635546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:18:47.109426       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:18:47.644397       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:19:17.116516       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:19:17.655340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:19:47.121696       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:19:47.663778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:20:17.127375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:20:17.672091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0421 20:20:47.132335       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0421 20:20:47.679321       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ae051d6fe30b27338aa746820cee85dda9cdb38ec5f7d0f6baa52d0216febecb] <==
	I0421 20:05:18.648882       1 server_linux.go:69] "Using iptables proxy"
	I0421 20:05:18.669011       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.9"]
	I0421 20:05:18.782720       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 20:05:18.782798       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 20:05:18.782822       1 server_linux.go:165] "Using iptables Proxier"
	I0421 20:05:18.786325       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 20:05:18.786508       1 server.go:872] "Version info" version="v1.30.0"
	I0421 20:05:18.786531       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 20:05:18.789709       1 config.go:319] "Starting node config controller"
	I0421 20:05:18.789720       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 20:05:18.789922       1 config.go:192] "Starting service config controller"
	I0421 20:05:18.789932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 20:05:18.789957       1 config.go:101] "Starting endpoint slice config controller"
	I0421 20:05:18.789960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 20:05:18.890628       1 shared_informer.go:320] Caches are synced for service config
	I0421 20:05:18.890695       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 20:05:18.890905       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1d2911b2e722b24fee324b6747b1d94880099939d58fe83f5dc68f8fecfd4726] <==
	W0421 20:05:02.393939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 20:05:02.393997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 20:05:02.457222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 20:05:02.457317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 20:05:02.475869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 20:05:02.476549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 20:05:02.476927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.477051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.522692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.522788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.539002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.539247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.566548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:02.568664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:02.587525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 20:05:02.587745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 20:05:02.627628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 20:05:02.627753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 20:05:02.775365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 20:05:02.775745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 20:05:02.849790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 20:05:02.849902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 20:05:03.012975       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 20:05:03.013147       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0421 20:05:05.059166       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:19:04 embed-certs-727235 kubelet[3974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:19:04 embed-certs-727235 kubelet[3974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:19:14 embed-certs-727235 kubelet[3974]: E0421 20:19:14.305917    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:19:25 embed-certs-727235 kubelet[3974]: E0421 20:19:25.306929    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:19:39 embed-certs-727235 kubelet[3974]: E0421 20:19:39.306530    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:19:51 embed-certs-727235 kubelet[3974]: E0421 20:19:51.306185    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:20:03 embed-certs-727235 kubelet[3974]: E0421 20:20:03.306317    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:20:04 embed-certs-727235 kubelet[3974]: E0421 20:20:04.332443    3974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:20:04 embed-certs-727235 kubelet[3974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:20:04 embed-certs-727235 kubelet[3974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:20:04 embed-certs-727235 kubelet[3974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:20:04 embed-certs-727235 kubelet[3974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:20:17 embed-certs-727235 kubelet[3974]: E0421 20:20:17.306121    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:20:28 embed-certs-727235 kubelet[3974]: E0421 20:20:28.305852    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:20:41 embed-certs-727235 kubelet[3974]: E0421 20:20:41.305775    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:20:54 embed-certs-727235 kubelet[3974]: E0421 20:20:54.310057    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	Apr 21 20:21:04 embed-certs-727235 kubelet[3974]: E0421 20:21:04.334752    3974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:21:04 embed-certs-727235 kubelet[3974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:21:04 embed-certs-727235 kubelet[3974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:21:04 embed-certs-727235 kubelet[3974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:21:04 embed-certs-727235 kubelet[3974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:21:09 embed-certs-727235 kubelet[3974]: E0421 20:21:09.319723    3974 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 21 20:21:09 embed-certs-727235 kubelet[3974]: E0421 20:21:09.319851    3974 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 21 20:21:09 embed-certs-727235 kubelet[3974]: E0421 20:21:09.320769    3974 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtt8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-2vwhn_kube-system(4cb94623-a7b9-41e3-a6bc-fcc8b2856365): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 21 20:21:09 embed-certs-727235 kubelet[3974]: E0421 20:21:09.320831    3974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-2vwhn" podUID="4cb94623-a7b9-41e3-a6bc-fcc8b2856365"
	
	
	==> storage-provisioner [97ead3853c3124e869b5ba3d3871fd0da04e30bff9a3dc01e0560d2a9009adf6] <==
	I0421 20:05:20.339385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 20:05:20.354169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 20:05:20.354371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 20:05:20.367867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 20:05:20.368841       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aeb25cf9-c04b-4331-b76b-6c89e286eace", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-727235_90871dc4-1acd-4fad-8088-3c43628171d2 became leader
	I0421 20:05:20.370120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-727235_90871dc4-1acd-4fad-8088-3c43628171d2!
	I0421 20:05:20.477947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-727235_90871dc4-1acd-4fad-8088-3c43628171d2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-727235 -n embed-certs-727235
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-727235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2vwhn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-727235 describe pod metrics-server-569cc877fc-2vwhn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-727235 describe pod metrics-server-569cc877fc-2vwhn: exit status 1 (58.106832ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2vwhn" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-727235 describe pod metrics-server-569cc877fc-2vwhn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (405.02s)
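
The kubelet log above shows why this check fails: the metrics-server container is configured to pull fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain does not resolve, so the pod stays in ImagePullBackOff and never becomes Ready. As a minimal, hypothetical follow-up (assuming the embed-certs-727235 profile were still running and the addon used its default deployment name), the configured image could be confirmed with:

	kubectl --context embed-certs-727235 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'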

Test pass (252/317)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 48.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 13.23
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 151.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 214.03
29 TestAddons/parallel/Registry 22.46
31 TestAddons/parallel/InspektorGadget 11.84
33 TestAddons/parallel/HelmTiller 23.3
35 TestAddons/parallel/CSI 57.48
36 TestAddons/parallel/Headlamp 14.97
37 TestAddons/parallel/CloudSpanner 5.61
38 TestAddons/parallel/LocalPath 56.2
39 TestAddons/parallel/NvidiaDevicePlugin 5.54
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestCertOptions 51.96
46 TestCertExpiration 313.92
48 TestForceSystemdFlag 47.72
49 TestForceSystemdEnv 45.63
51 TestKVMDriverInstallOrUpdate 5.67
55 TestErrorSpam/setup 47.04
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.78
58 TestErrorSpam/pause 1.72
59 TestErrorSpam/unpause 1.81
60 TestErrorSpam/stop 5.53
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 62.18
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 76.48
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
72 TestFunctional/serial/CacheCmd/cache/add_local 2.55
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 36.34
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.57
83 TestFunctional/serial/LogsFileCmd 1.64
84 TestFunctional/serial/InvalidService 4.36
86 TestFunctional/parallel/ConfigCmd 0.42
87 TestFunctional/parallel/DashboardCmd 23.22
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 1.28
94 TestFunctional/parallel/ServiceCmdConnect 6.49
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 33.64
98 TestFunctional/parallel/SSHCmd 0.41
99 TestFunctional/parallel/CpCmd 1.53
100 TestFunctional/parallel/MySQL 29
101 TestFunctional/parallel/FileSync 0.23
102 TestFunctional/parallel/CertSync 1.84
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
110 TestFunctional/parallel/License 0.64
111 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 0.8
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
118 TestFunctional/parallel/ImageCommands/ImageBuild 5.59
119 TestFunctional/parallel/ImageCommands/Setup 2.19
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.71
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
125 TestFunctional/parallel/ProfileCmd/profile_list 0.37
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
127 TestFunctional/parallel/MountCmd/any-port 23.61
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.38
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.29
130 TestFunctional/parallel/ServiceCmd/List 0.43
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
133 TestFunctional/parallel/ServiceCmd/Format 0.37
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.56
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.32
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.13
139 TestFunctional/parallel/MountCmd/specific-port 1.73
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 262.19
157 TestMultiControlPlane/serial/DeployApp 6.84
158 TestMultiControlPlane/serial/PingHostFromPods 1.37
159 TestMultiControlPlane/serial/AddWorkerNode 48.29
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
162 TestMultiControlPlane/serial/CopyFile 13.57
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.51
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.41
171 TestMultiControlPlane/serial/RestartCluster 358.04
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
173 TestMultiControlPlane/serial/AddSecondaryNode 72.87
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
178 TestJSONOutput/start/Command 97.3
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.77
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.7
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.45
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 97.19
210 TestMountStart/serial/StartWithMountFirst 29.19
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 28.46
213 TestMountStart/serial/VerifyMountSecond 0.39
214 TestMountStart/serial/DeleteFirst 0.86
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 1.45
217 TestMountStart/serial/RestartStopped 23.75
218 TestMountStart/serial/VerifyMountPostStop 0.38
221 TestMultiNode/serial/FreshStart2Nodes 111.53
222 TestMultiNode/serial/DeployApp2Nodes 5.37
223 TestMultiNode/serial/PingHostFrom2Pods 0.9
224 TestMultiNode/serial/AddNode 43.03
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.67
228 TestMultiNode/serial/StopNode 2.53
229 TestMultiNode/serial/StartAfterStop 30.67
231 TestMultiNode/serial/DeleteNode 2.46
233 TestMultiNode/serial/RestartMultiNode 196.69
234 TestMultiNode/serial/ValidateNameConflict 49.32
241 TestScheduledStopUnix 118.07
245 TestRunningBinaryUpgrade 159.66
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
251 TestNoKubernetes/serial/StartWithK8s 126.09
252 TestNoKubernetes/serial/StartWithStopK8s 10.83
253 TestNoKubernetes/serial/Start 29.48
254 TestStoppedBinaryUpgrade/Setup 2.57
255 TestStoppedBinaryUpgrade/Upgrade 145.74
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
257 TestNoKubernetes/serial/ProfileList 1.05
258 TestNoKubernetes/serial/Stop 1.49
259 TestNoKubernetes/serial/StartNoArgs 23.35
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
270 TestPause/serial/Start 67.96
278 TestNetworkPlugins/group/false 3.21
282 TestPause/serial/SecondStartNoReconfiguration 56.67
286 TestStartStop/group/no-preload/serial/FirstStart 154.81
287 TestPause/serial/Pause 1.13
288 TestPause/serial/VerifyStatus 0.33
289 TestPause/serial/Unpause 0.8
290 TestPause/serial/PauseAgain 0.94
291 TestPause/serial/DeletePaused 0.87
292 TestPause/serial/VerifyDeletedResources 2.03
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.79
295 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
298 TestStartStop/group/no-preload/serial/DeployApp 11.31
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 685.66
306 TestStartStop/group/no-preload/serial/SecondStart 602.96
307 TestStartStop/group/old-k8s-version/serial/Stop 2.3
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/newest-cni/serial/FirstStart 171.27
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.4
314 TestStartStop/group/newest-cni/serial/Stop 7.39
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/newest-cni/serial/SecondStart 40.4
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/newest-cni/serial/Pause 2.52
322 TestStartStop/group/embed-certs/serial/FirstStart 62.32
323 TestStartStop/group/embed-certs/serial/DeployApp 10.28
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
328 TestStartStop/group/embed-certs/serial/SecondStart 629.89
335 TestNetworkPlugins/group/auto/Start 63.2
336 TestNetworkPlugins/group/kindnet/Start 98.13
337 TestNetworkPlugins/group/calico/Start 100.58
338 TestNetworkPlugins/group/auto/KubeletFlags 0.24
339 TestNetworkPlugins/group/auto/NetCatPod 13.29
340 TestNetworkPlugins/group/auto/DNS 0.18
341 TestNetworkPlugins/group/auto/Localhost 0.2
342 TestNetworkPlugins/group/auto/HairPin 0.15
343 TestNetworkPlugins/group/custom-flannel/Start 88.3
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
346 TestNetworkPlugins/group/kindnet/NetCatPod 12.31
347 TestNetworkPlugins/group/kindnet/DNS 0.2
348 TestNetworkPlugins/group/kindnet/Localhost 0.17
349 TestNetworkPlugins/group/kindnet/HairPin 0.16
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/KubeletFlags 0.39
352 TestNetworkPlugins/group/calico/NetCatPod 12.32
353 TestNetworkPlugins/group/enable-default-cni/Start 102.41
354 TestNetworkPlugins/group/calico/DNS 0.19
355 TestNetworkPlugins/group/calico/Localhost 0.16
356 TestNetworkPlugins/group/calico/HairPin 0.18
357 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
358 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
359 TestNetworkPlugins/group/flannel/Start 94.21
360 TestNetworkPlugins/group/custom-flannel/DNS 0.23
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
363 TestNetworkPlugins/group/bridge/Start 113.34
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
371 TestNetworkPlugins/group/flannel/NetCatPod 11.23
372 TestNetworkPlugins/group/flannel/DNS 0.19
373 TestNetworkPlugins/group/flannel/Localhost 0.14
374 TestNetworkPlugins/group/flannel/HairPin 0.15
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
376 TestNetworkPlugins/group/bridge/NetCatPod 12.23
378 TestNetworkPlugins/group/bridge/DNS 0.15
379 TestNetworkPlugins/group/bridge/Localhost 0.12
380 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.20.0/json-events (48.86s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-916770 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-916770 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (48.861524186s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (48.86s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-916770
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-916770: exit status 85 (71.123158ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-916770 | jenkins | v1.33.0 | 21 Apr 24 18:21 UTC |          |
	|         | -p download-only-916770        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:21:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:21:31.570422   11187 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:21:31.570557   11187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:21:31.570566   11187 out.go:304] Setting ErrFile to fd 2...
	I0421 18:21:31.570570   11187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:21:31.570768   11187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	W0421 18:21:31.570909   11187 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18702-3854/.minikube/config/config.json: open /home/jenkins/minikube-integration/18702-3854/.minikube/config/config.json: no such file or directory
	I0421 18:21:31.571438   11187 out.go:298] Setting JSON to true
	I0421 18:21:31.572256   11187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":190,"bootTime":1713723502,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:21:31.572327   11187 start.go:139] virtualization: kvm guest
	I0421 18:21:31.574827   11187 out.go:97] [download-only-916770] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:21:31.576336   11187 out.go:169] MINIKUBE_LOCATION=18702
	I0421 18:21:31.574954   11187 notify.go:220] Checking for updates...
	W0421 18:21:31.574990   11187 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball: no such file or directory
	I0421 18:21:31.579049   11187 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:21:31.580345   11187 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:21:31.581626   11187 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:21:31.582862   11187 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0421 18:21:31.585697   11187 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0421 18:21:31.585911   11187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:21:31.690111   11187 out.go:97] Using the kvm2 driver based on user configuration
	I0421 18:21:31.690136   11187 start.go:297] selected driver: kvm2
	I0421 18:21:31.690141   11187 start.go:901] validating driver "kvm2" against <nil>
	I0421 18:21:31.690447   11187 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:21:31.690570   11187 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:21:31.705034   11187 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:21:31.705102   11187 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:21:31.705655   11187 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0421 18:21:31.705825   11187 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0421 18:21:31.705897   11187 cni.go:84] Creating CNI manager for ""
	I0421 18:21:31.705911   11187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:21:31.705919   11187 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 18:21:31.705998   11187 start.go:340] cluster config:
	{Name:download-only-916770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-916770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:21:31.706212   11187 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:21:31.708162   11187 out.go:97] Downloading VM boot image ...
	I0421 18:21:31.708190   11187 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0421 18:21:41.345562   11187 out.go:97] Starting "download-only-916770" primary control-plane node in "download-only-916770" cluster
	I0421 18:21:41.345594   11187 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 18:21:41.456224   11187 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:21:41.456271   11187 cache.go:56] Caching tarball of preloaded images
	I0421 18:21:41.456447   11187 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 18:21:41.458920   11187 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0421 18:21:41.458946   11187 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0421 18:21:41.568726   11187 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:21:53.380475   11187 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0421 18:21:53.380565   11187 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0421 18:21:54.288152   11187 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0421 18:21:54.288542   11187 profile.go:143] Saving config to /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/download-only-916770/config.json ...
	I0421 18:21:54.288573   11187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/download-only-916770/config.json: {Name:mkbc7d38400b52b5636eb4a06226eea7842a90ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:21:54.288733   11187 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0421 18:21:54.288905   11187 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-916770 host does not exist
	  To start a cluster, run: "minikube start -p download-only-916770"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-916770
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0/json-events (13.23s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-287232 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-287232 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.229737206s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (13.23s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-287232
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-287232: exit status 85 (67.482555ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-916770 | jenkins | v1.33.0 | 21 Apr 24 18:21 UTC |                     |
	|         | -p download-only-916770        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| delete  | -p download-only-916770        | download-only-916770 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC | 21 Apr 24 18:22 UTC |
	| start   | -o=json --download-only        | download-only-287232 | jenkins | v1.33.0 | 21 Apr 24 18:22 UTC |                     |
	|         | -p download-only-287232        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:22:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:22:20.772862   11949 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:22:20.772967   11949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:22:20.772975   11949 out.go:304] Setting ErrFile to fd 2...
	I0421 18:22:20.772979   11949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:22:20.773158   11949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:22:20.773666   11949 out.go:298] Setting JSON to true
	I0421 18:22:20.774493   11949 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":239,"bootTime":1713723502,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:22:20.774547   11949 start.go:139] virtualization: kvm guest
	I0421 18:22:20.776899   11949 out.go:97] [download-only-287232] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:22:20.778549   11949 out.go:169] MINIKUBE_LOCATION=18702
	I0421 18:22:20.777092   11949 notify.go:220] Checking for updates...
	I0421 18:22:20.781631   11949 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:22:20.783110   11949 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:22:20.784678   11949 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:22:20.786368   11949 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0421 18:22:20.789561   11949 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0421 18:22:20.789873   11949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:22:20.820948   11949 out.go:97] Using the kvm2 driver based on user configuration
	I0421 18:22:20.820979   11949 start.go:297] selected driver: kvm2
	I0421 18:22:20.820989   11949 start.go:901] validating driver "kvm2" against <nil>
	I0421 18:22:20.821384   11949 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:22:20.821469   11949 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18702-3854/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0421 18:22:20.836171   11949 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0421 18:22:20.836250   11949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:22:20.836813   11949 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0421 18:22:20.836942   11949 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0421 18:22:20.836991   11949 cni.go:84] Creating CNI manager for ""
	I0421 18:22:20.837003   11949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0421 18:22:20.837010   11949 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 18:22:20.837072   11949 start.go:340] cluster config:
	{Name:download-only-287232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-287232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:22:20.837157   11949 iso.go:125] acquiring lock: {Name:mkcc127c99cb9de76f55e1497bea18abd579a402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:22:20.839003   11949 out.go:97] Starting "download-only-287232" primary control-plane node in "download-only-287232" cluster
	I0421 18:22:20.839029   11949 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:22:21.338308   11949 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0421 18:22:21.338341   11949 cache.go:56] Caching tarball of preloaded images
	I0421 18:22:21.338496   11949 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0421 18:22:21.340501   11949 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0421 18:22:21.340525   11949 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0421 18:22:21.450529   11949 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18702-3854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-287232 host does not exist
	  To start a cluster, run: "minikube start -p download-only-287232"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-287232
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-997979 --alsologtostderr --binary-mirror http://127.0.0.1:34105 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-997979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-997979
--- PASS: TestBinaryMirror (0.57s)

TestOffline (151.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-884831 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-884831 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m30.273180987s)
helpers_test.go:175: Cleaning up "offline-crio-884831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-884831
--- PASS: TestOffline (151.13s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-337450
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-337450: exit status 85 (74.506076ms)

-- stdout --
	* Profile "addons-337450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-337450"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-337450
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-337450: exit status 85 (74.775112ms)

-- stdout --
	* Profile "addons-337450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-337450"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (214.03s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-337450 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-337450 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m34.026430064s)
--- PASS: TestAddons/Setup (214.03s)

TestAddons/parallel/Registry (22.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 21.000331ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hqdlr" [5295efd0-2d0b-45a9-92f4-12ac59b9f395] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006013475s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-psfhr" [29887109-7168-4513-91b6-e2f7615b03d0] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005604311s
addons_test.go:340: (dbg) Run:  kubectl --context addons-337450 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-337450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-337450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.254209937s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 ip
2024/04/21 18:26:30 [DEBUG] GET http://192.168.39.51:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-337450 addons disable registry --alsologtostderr -v=1: (1.0155054s)
--- PASS: TestAddons/parallel/Registry (22.46s)

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vzhnv" [810c9541-dd87-447b-a83a-c9e5a275062d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005222177s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-337450
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-337450: (5.834833547s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/HelmTiller (23.3s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 21.771594ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-lrdr7" [d0119b9a-443d-45f9-adeb-fc91c36d95a9] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.009834419s
addons_test.go:473: (dbg) Run:  kubectl --context addons-337450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-337450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (12.949386277s)
addons_test.go:478: kubectl --context addons-337450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:473: (dbg) Run:  kubectl --context addons-337450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-337450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.146073502s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (23.30s)

TestAddons/parallel/CSI (57.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.273074ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-337450 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-337450 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f7f987c6-218f-4222-967b-35e921a06972] Pending
helpers_test.go:344: "task-pv-pod" [f7f987c6-218f-4222-967b-35e921a06972] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f7f987c6-218f-4222-967b-35e921a06972] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004294394s
addons_test.go:584: (dbg) Run:  kubectl --context addons-337450 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-337450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-337450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-337450 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-337450 delete pod task-pv-pod: (1.429195348s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-337450 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-337450 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-337450 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f19ec6f0-d34d-436c-bc0d-b119beeb3042] Pending
helpers_test.go:344: "task-pv-pod-restore" [f19ec6f0-d34d-436c-bc0d-b119beeb3042] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f19ec6f0-d34d-436c-bc0d-b119beeb3042] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004632073s
addons_test.go:626: (dbg) Run:  kubectl --context addons-337450 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-337450 delete pod task-pv-pod-restore: (1.033866548s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-337450 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-337450 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-337450 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.864540605s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.48s)

TestAddons/parallel/Headlamp (14.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-337450 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-h8lsl" [b58d5b92-2bf6-4e12-b34b-478e60c90c28] Pending
helpers_test.go:344: "headlamp-7559bf459f-h8lsl" [b58d5b92-2bf6-4e12-b34b-478e60c90c28] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-h8lsl" [b58d5b92-2bf6-4e12-b34b-478e60c90c28] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004553008s
--- PASS: TestAddons/parallel/Headlamp (14.97s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-j7742" [28831a2b-7b5f-4b39-860f-dd7974b5c364] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004374326s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-337450
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (56.2s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-337450 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-337450 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3fc077bd-1fc7-4b5c-926d-b286c2c2afb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3fc077bd-1fc7-4b5c-926d-b286c2c2afb6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3fc077bd-1fc7-4b5c-926d-b286c2c2afb6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.010140535s
addons_test.go:891: (dbg) Run:  kubectl --context addons-337450 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 ssh "cat /opt/local-path-provisioner/pvc-17b0f281-1dfd-4035-a69d-f977b9bf0dd8_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-337450 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-337450 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-337450 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-337450 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.388667808s)
--- PASS: TestAddons/parallel/LocalPath (56.20s)

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hggr8" [ab89f680-78cb-478b-929f-acea30c6e4c8] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006095134s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-337450
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-drwst" [6b583820-9a1d-4846-ad22-09785b6ab382] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00430483s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-337450 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-337450 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (51.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-015184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-015184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.369514939s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-015184 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-015184 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-015184 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-015184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-015184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-015184: (1.062934241s)
--- PASS: TestCertOptions (51.96s)

TestCertExpiration (313.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-942511 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-942511 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.506871261s)
E0421 19:33:49.251307   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 19:34:06.204969   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-942511 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-942511 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m3.384649775s)
helpers_test.go:175: Cleaning up "cert-expiration-942511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-942511
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-942511: (1.026580606s)
--- PASS: TestCertExpiration (313.92s)

TestForceSystemdFlag (47.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-267918 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-267918 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.520778408s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-267918 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-267918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-267918
--- PASS: TestForceSystemdFlag (47.72s)

TestForceSystemdEnv (45.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-923206 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-923206 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.617609131s)
helpers_test.go:175: Cleaning up "force-systemd-env-923206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-923206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-923206: (1.016762009s)
--- PASS: TestForceSystemdEnv (45.63s)

TestKVMDriverInstallOrUpdate (5.67s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.67s)

TestErrorSpam/setup (47.04s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-521641 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-521641 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-521641 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-521641 --driver=kvm2  --container-runtime=crio: (47.044250879s)
--- PASS: TestErrorSpam/setup (47.04s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (5.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 stop: (2.304059936s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 stop: (1.740258071s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-521641 --log_dir /tmp/nospam-521641 stop: (1.48774501s)
--- PASS: TestErrorSpam/stop (5.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18702-3854/.minikube/files/etc/test/nested/copy/11175/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977002 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0421 18:36:09.207737   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.214299   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.224466   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.245484   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.285882   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.366243   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.526668   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:09.847271   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:10.488272   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:11.768973   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:14.330754   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:19.451293   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:29.692367   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:36:50.173210   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-977002 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m2.181894359s)
--- PASS: TestFunctional/serial/StartWithProxy (62.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (76.48s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977002 --alsologtostderr -v=8
E0421 18:37:31.133734   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-977002 --alsologtostderr -v=8: (1m16.48206112s)
functional_test.go:659: soft start took 1m16.482897871s for "functional-977002" cluster.
--- PASS: TestFunctional/serial/SoftStart (76.48s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-977002 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 cache add registry.k8s.io/pause:3.1: (1.024484711s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 cache add registry.k8s.io/pause:3.3: (1.15076514s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 cache add registry.k8s.io/pause:latest: (1.071512111s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

TestFunctional/serial/CacheCmd/cache/add_local (2.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-977002 /tmp/TestFunctionalserialCacheCmdcacheadd_local3579149697/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cache add minikube-local-cache-test:functional-977002
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 cache add minikube-local-cache-test:functional-977002: (2.19647167s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cache delete minikube-local-cache-test:functional-977002
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-977002
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.55s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (240.228986ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 kubectl -- --context functional-977002 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-977002 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.34s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977002 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0421 18:38:53.054198   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-977002 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.343147328s)
functional_test.go:757: restart took 36.343253708s for "functional-977002" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.34s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-977002 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.57s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 logs: (1.567059727s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

TestFunctional/serial/LogsFileCmd (1.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 logs --file /tmp/TestFunctionalserialLogsFileCmd2993616098/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 logs --file /tmp/TestFunctionalserialLogsFileCmd2993616098/001/logs.txt: (1.634222509s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-977002 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-977002
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-977002: exit status 115 (301.823619ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.104:32233 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-977002 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 config get cpus: exit status 14 (75.569311ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 config get cpus: exit status 14 (62.34203ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
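All of the commands this test runs appear verbatim above; collected in one place, the cycle shows that `config get` exits with status 14 while the key is unset:

	out/minikube-linux-amd64 -p functional-977002 config unset cpus
	out/minikube-linux-amd64 -p functional-977002 config get cpus    # exit status 14: key not found in config
	out/minikube-linux-amd64 -p functional-977002 config set cpus 2
	out/minikube-linux-amd64 -p functional-977002 config get cpus    # should now print the stored value
	out/minikube-linux-amd64 -p functional-977002 config unset cpus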

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (23.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-977002 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-977002 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20857: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.22s)
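The dashboard is launched as a background process on a fixed proxy port and then torn down; the "unable to kill pid" message only means the process had already exited by the time cleanup ran. The daemonized command, taken from the log, can be run manually as:

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-977002 --alsologtostderr -v=1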

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977002 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-977002 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.024208ms)

                                                
                                                
-- stdout --
	* [functional-977002] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:39:20.576415   20728 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:39:20.576776   20728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:39:20.576789   20728 out.go:304] Setting ErrFile to fd 2...
	I0421 18:39:20.576796   20728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:39:20.577004   20728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:39:20.577561   20728 out.go:298] Setting JSON to false
	I0421 18:39:20.578667   20728 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1259,"bootTime":1713723502,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:39:20.578733   20728 start.go:139] virtualization: kvm guest
	I0421 18:39:20.580575   20728 out.go:177] * [functional-977002] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 18:39:20.582806   20728 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:39:20.582780   20728 notify.go:220] Checking for updates...
	I0421 18:39:20.584603   20728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:39:20.586612   20728 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:39:20.588510   20728 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:39:20.590089   20728 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:39:20.591279   20728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:39:20.593368   20728 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:39:20.593981   20728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:39:20.594038   20728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:39:20.612851   20728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0421 18:39:20.613357   20728 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:39:20.613925   20728 main.go:141] libmachine: Using API Version  1
	I0421 18:39:20.613943   20728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:39:20.614257   20728 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:39:20.614427   20728 main.go:141] libmachine: (functional-977002) Calling .DriverName
	I0421 18:39:20.614704   20728 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:39:20.615048   20728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:39:20.615101   20728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:39:20.630289   20728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0421 18:39:20.630796   20728 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:39:20.631335   20728 main.go:141] libmachine: Using API Version  1
	I0421 18:39:20.631359   20728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:39:20.631633   20728 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:39:20.631852   20728 main.go:141] libmachine: (functional-977002) Calling .DriverName
	I0421 18:39:20.666050   20728 out.go:177] * Using the kvm2 driver based on existing profile
	I0421 18:39:20.667706   20728 start.go:297] selected driver: kvm2
	I0421 18:39:20.667739   20728 start.go:901] validating driver "kvm2" against &{Name:functional-977002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:functional-977002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:39:20.667872   20728 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:39:20.669947   20728 out.go:177] 
	W0421 18:39:20.671361   20728 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0421 18:39:20.672879   20728 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977002 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
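Both invocations come from the log. With `--dry-run` no VM is touched, but validation still runs: the first call fails (exit 23) because 250MB is below the usable minimum of 1800MB, while the second, without a memory override, passes:

	out/minikube-linux-amd64 start -p functional-977002 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-linux-amd64 start -p functional-977002 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio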

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-977002 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-977002 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.145078ms)

                                                
                                                
-- stdout --
	* [functional-977002] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 18:39:20.884586   20782 out.go:291] Setting OutFile to fd 1 ...
	I0421 18:39:20.884700   20782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:39:20.884714   20782 out.go:304] Setting ErrFile to fd 2...
	I0421 18:39:20.884719   20782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:39:20.884981   20782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 18:39:20.885483   20782 out.go:298] Setting JSON to false
	I0421 18:39:20.886361   20782 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1259,"bootTime":1713723502,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 18:39:20.886438   20782 start.go:139] virtualization: kvm guest
	I0421 18:39:20.888551   20782 out.go:177] * [functional-977002] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0421 18:39:20.890433   20782 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:39:20.891778   20782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:39:20.890441   20782 notify.go:220] Checking for updates...
	I0421 18:39:20.894228   20782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 18:39:20.895631   20782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 18:39:20.897055   20782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 18:39:20.898285   20782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:39:20.899807   20782 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 18:39:20.900220   20782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:39:20.900259   20782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:39:20.915486   20782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0421 18:39:20.915998   20782 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:39:20.916681   20782 main.go:141] libmachine: Using API Version  1
	I0421 18:39:20.916714   20782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:39:20.917101   20782 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:39:20.917297   20782 main.go:141] libmachine: (functional-977002) Calling .DriverName
	I0421 18:39:20.917588   20782 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:39:20.917881   20782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 18:39:20.917919   20782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 18:39:20.932926   20782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0421 18:39:20.933609   20782 main.go:141] libmachine: () Calling .GetVersion
	I0421 18:39:20.934374   20782 main.go:141] libmachine: Using API Version  1
	I0421 18:39:20.934453   20782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 18:39:20.934859   20782 main.go:141] libmachine: () Calling .GetMachineName
	I0421 18:39:20.935380   20782 main.go:141] libmachine: (functional-977002) Calling .DriverName
	I0421 18:39:20.970988   20782 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0421 18:39:20.972515   20782 start.go:297] selected driver: kvm2
	I0421 18:39:20.972532   20782 start.go:901] validating driver "kvm2" against &{Name:functional-977002 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:functional-977002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:39:20.972680   20782 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:39:20.975139   20782 out.go:177] 
	W0421 18:39:20.976497   20782 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0421 18:39:20.977779   20782 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
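This is the same dry-run call as in DryRun, but the RSRC_INSUFFICIENT_REQ_MEMORY message is rendered in French, so the test presumably runs it under a French locale. The locale variable below is an assumption; the log does not show how the environment was set up:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-977002 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio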

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)
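The three status invocations from the log, for reference; the custom format string (including its "kublet" label, reproduced as-is) is a Go template over the status fields:

	out/minikube-linux-amd64 -p functional-977002 status
	out/minikube-linux-amd64 -p functional-977002 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-amd64 -p functional-977002 status -o json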

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-977002 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-977002 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-fw5sn" [586eea2a-ce1c-4107-a129-4f8d4e66a2d0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-fw5sn" [586eea2a-ce1c-4107-a129-4f8d4e66a2d0] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.008133001s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.104:30205
functional_test.go:1671: http://192.168.39.104:30205: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-fw5sn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.104:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.104:30205
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.49s)
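The commands below are taken from the log; the final curl is an assumption, standing in for the HTTP request the test issues against the URL that `minikube service --url` prints (the NodePort, 30205 in this run, is allocated per run):

	kubectl --context functional-977002 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-977002 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-977002 service hello-node-connect --url
	curl http://192.168.39.104:30205/    # hypothetical manual check of the printed endpoint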

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (33.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8bd65523-63f4-4264-be76-6886e8e508e3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006355705s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-977002 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-977002 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-977002 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-977002 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-977002 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dfeaddbb-4693-4118-99e2-4566434e57da] Pending
helpers_test.go:344: "sp-pod" [dfeaddbb-4693-4118-99e2-4566434e57da] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dfeaddbb-4693-4118-99e2-4566434e57da] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005673357s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-977002 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-977002 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-977002 delete -f testdata/storage-provisioner/pod.yaml: (1.055071207s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-977002 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [290746fa-ef9a-4591-b9bd-481cdce82301] Pending
helpers_test.go:344: "sp-pod" [290746fa-ef9a-4591-b9bd-481cdce82301] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [290746fa-ef9a-4591-b9bd-481cdce82301] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004464781s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-977002 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.64s)
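All steps below appear in the log. The point of the second pod is persistence: /tmp/mount is backed by the PVC, so the file created before the first pod is deleted is still visible from its replacement:

	kubectl --context functional-977002 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-977002 get pvc myclaim -o=json                         # poll until the claim is bound
	kubectl --context functional-977002 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-977002 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-977002 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-977002 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-977002 exec sp-pod -- ls /tmp/mount                    # foo survives the pod replacement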

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh -n functional-977002 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cp functional-977002:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2436751745/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh -n functional-977002 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh -n functional-977002 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.53s)
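Taken together with SSHCmd above, this exercises file transfer in both directions plus a copy to a path that does not yet exist on the node. The commands are from the log, with only the host-side destination generalized to any writable local path:

	out/minikube-linux-amd64 -p functional-977002 ssh "echo hello"
	out/minikube-linux-amd64 -p functional-977002 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-977002 ssh -n functional-977002 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-977002 cp functional-977002:/home/docker/cp-test.txt ./cp-test.txt
	out/minikube-linux-amd64 -p functional-977002 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt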

                                                
                                    
x
+
TestFunctional/parallel/MySQL (29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-977002 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-vxc9b" [55258866-d217-4530-8475-d30736ea393f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-vxc9b" [55258866-d217-4530-8475-d30736ea393f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.020653863s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-977002 exec mysql-64454c8b5c-vxc9b -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-977002 exec mysql-64454c8b5c-vxc9b -- mysql -ppassword -e "show databases;": exit status 1 (167.880174ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-977002 exec mysql-64454c8b5c-vxc9b -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-977002 exec mysql-64454c8b5c-vxc9b -- mysql -ppassword -e "show databases;": exit status 1 (235.299659ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-977002 exec mysql-64454c8b5c-vxc9b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.00s)
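The access-denied and socket errors above are expected transients while mysqld finishes initializing; the test simply retries until the query succeeds. The pod name is from this run and will differ elsewhere:

	kubectl --context functional-977002 replace --force -f testdata/mysql.yaml
	kubectl --context functional-977002 exec mysql-64454c8b5c-vxc9b -- mysql -ppassword -e "show databases;"   # retry until mysqld is ready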

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11175/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/test/nested/copy/11175/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11175.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/ssl/certs/11175.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11175.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /usr/share/ca-certificates/11175.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/111752.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/ssl/certs/111752.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/111752.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /usr/share/ca-certificates/111752.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
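FileSync and CertSync both verify minikube's host-to-VM sync: files and certificates placed under the profile's .minikube directory (its files/ and certs/ subdirectories) are expected to appear inside the node at the paths checked above; the 11175/111752 names are per-run test files. Verification is a plain ssh cat, as in the log:

	out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/test/nested/copy/11175/hosts"
	out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /etc/ssl/certs/11175.pem"
	out/minikube-linux-amd64 -p functional-977002 ssh "sudo cat /usr/share/ca-certificates/11175.pem"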

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-977002 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh "sudo systemctl is-active docker": exit status 1 (242.299464ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh "sudo systemctl is-active containerd": exit status 1 (253.237991ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
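Because this profile runs with --container-runtime=crio, the docker and containerd units should be inactive. `systemctl is-active` exits non-zero for an inactive unit (status 3 here), which `minikube ssh` surfaces as exit status 1, so the non-zero exits above are the pass condition rather than a failure:

	out/minikube-linux-amd64 -p functional-977002 ssh "sudo systemctl is-active docker"       # prints "inactive", exits non-zero
	out/minikube-linux-amd64 -p functional-977002 ssh "sudo systemctl is-active containerd"   # same expectation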

                                                
                                    
x
+
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-977002 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-977002 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-5k48s" [8a6135b9-ca2f-438d-b919-c4bcf31e50bc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-5k48s" [8a6135b9-ca2f-438d-b919-c4bcf31e50bc] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003936399s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977002 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-977002
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-977002
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977002 image ls --format short --alsologtostderr:
I0421 18:39:39.353720   21788 out.go:291] Setting OutFile to fd 1 ...
I0421 18:39:39.354029   21788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:39.354044   21788 out.go:304] Setting ErrFile to fd 2...
I0421 18:39:39.354051   21788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:39.354374   21788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
I0421 18:39:39.355159   21788 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:39.355302   21788 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:39.355900   21788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:39.355954   21788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:39.372023   21788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
I0421 18:39:39.372505   21788 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:39.373189   21788 main.go:141] libmachine: Using API Version  1
I0421 18:39:39.373215   21788 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:39.373570   21788 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:39.373746   21788 main.go:141] libmachine: (functional-977002) Calling .GetState
I0421 18:39:39.375629   21788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:39.375668   21788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:39.391042   21788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
I0421 18:39:39.391582   21788 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:39.392149   21788 main.go:141] libmachine: Using API Version  1
I0421 18:39:39.392176   21788 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:39.392559   21788 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:39.392799   21788 main.go:141] libmachine: (functional-977002) Calling .DriverName
I0421 18:39:39.393024   21788 ssh_runner.go:195] Run: systemctl --version
I0421 18:39:39.393051   21788 main.go:141] libmachine: (functional-977002) Calling .GetSSHHostname
I0421 18:39:39.395582   21788 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:39.395949   21788 main.go:141] libmachine: (functional-977002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d8:e5", ip: ""} in network mk-functional-977002: {Iface:virbr1 ExpiryTime:2024-04-21 19:36:11 +0000 UTC Type:0 Mac:52:54:00:c3:d8:e5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:functional-977002 Clientid:01:52:54:00:c3:d8:e5}
I0421 18:39:39.395975   21788 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined IP address 192.168.39.104 and MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:39.396110   21788 main.go:141] libmachine: (functional-977002) Calling .GetSSHPort
I0421 18:39:39.396272   21788 main.go:141] libmachine: (functional-977002) Calling .GetSSHKeyPath
I0421 18:39:39.396447   21788 main.go:141] libmachine: (functional-977002) Calling .GetSSHUsername
I0421 18:39:39.396577   21788 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/functional-977002/id_rsa Username:docker}
I0421 18:39:39.518891   21788 ssh_runner.go:195] Run: sudo crictl images --output json
I0421 18:39:39.606370   21788 main.go:141] libmachine: Making call to close driver server
I0421 18:39:39.606400   21788 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:39.606767   21788 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:39.606784   21788 main.go:141] libmachine: Making call to close connection to plugin binary
I0421 18:39:39.606793   21788 main.go:141] libmachine: Making call to close driver server
I0421 18:39:39.606800   21788 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:39.606881   21788 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
I0421 18:39:39.607057   21788 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
I0421 18:39:39.607067   21788 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:39.607111   21788 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
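The image inventory can be listed in the other supported formats as well; the short, table, and json variants all appear in this report, and the stderr traces show each is backed by `sudo crictl images --output json` inside the VM:

	out/minikube-linux-amd64 -p functional-977002 image ls --format short --alsologtostderr
	out/minikube-linux-amd64 -p functional-977002 image ls --format table --alsologtostderr
	out/minikube-linux-amd64 -p functional-977002 image ls --format json --alsologtostderr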

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977002 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/google-containers/addon-resizer  | functional-977002  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-977002  | cca9fdf918987 | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977002 image ls --format table --alsologtostderr:
I0421 18:39:43.294126   22016 out.go:291] Setting OutFile to fd 1 ...
I0421 18:39:43.294259   22016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:43.294269   22016 out.go:304] Setting ErrFile to fd 2...
I0421 18:39:43.294275   22016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:43.294474   22016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
I0421 18:39:43.295060   22016 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:43.295170   22016 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:43.295539   22016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:43.295588   22016 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:43.309860   22016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
I0421 18:39:43.310306   22016 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:43.310826   22016 main.go:141] libmachine: Using API Version  1
I0421 18:39:43.310847   22016 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:43.311144   22016 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:43.311313   22016 main.go:141] libmachine: (functional-977002) Calling .GetState
I0421 18:39:43.312918   22016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:43.312956   22016 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:43.327018   22016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
I0421 18:39:43.327410   22016 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:43.327871   22016 main.go:141] libmachine: Using API Version  1
I0421 18:39:43.327891   22016 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:43.328145   22016 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:43.328312   22016 main.go:141] libmachine: (functional-977002) Calling .DriverName
I0421 18:39:43.328476   22016 ssh_runner.go:195] Run: systemctl --version
I0421 18:39:43.328496   22016 main.go:141] libmachine: (functional-977002) Calling .GetSSHHostname
I0421 18:39:43.330732   22016 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:43.331074   22016 main.go:141] libmachine: (functional-977002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d8:e5", ip: ""} in network mk-functional-977002: {Iface:virbr1 ExpiryTime:2024-04-21 19:36:11 +0000 UTC Type:0 Mac:52:54:00:c3:d8:e5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:functional-977002 Clientid:01:52:54:00:c3:d8:e5}
I0421 18:39:43.331101   22016 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined IP address 192.168.39.104 and MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:43.331240   22016 main.go:141] libmachine: (functional-977002) Calling .GetSSHPort
I0421 18:39:43.331409   22016 main.go:141] libmachine: (functional-977002) Calling .GetSSHKeyPath
I0421 18:39:43.331556   22016 main.go:141] libmachine: (functional-977002) Calling .GetSSHUsername
I0421 18:39:43.331672   22016 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/functional-977002/id_rsa Username:docker}
I0421 18:39:43.418230   22016 ssh_runner.go:195] Run: sudo crictl images --output json
I0421 18:39:43.468422   22016 main.go:141] libmachine: Making call to close driver server
I0421 18:39:43.468444   22016 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:43.468730   22016 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:43.468763   22016 main.go:141] libmachine: Making call to close connection to plugin binary
I0421 18:39:43.468774   22016 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
I0421 18:39:43.468779   22016 main.go:141] libmachine: Making call to close driver server
I0421 18:39:43.468789   22016 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:43.469024   22016 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:43.469039   22016 main.go:141] libmachine: Making call to close connection to plugin binary
2024/04/21 18:39:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977002 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","rep
oDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/
storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a
87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"re
poTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"cca9fdf918987eb47ba7231d18e3e44675bd8c31255e668f0a0b12cf24abfab4","repoDigests":["localhost/minikube-local-cache-test@sha256:99f
eb704e7e962d5dd686b19e3d2ad6d6d5b59ded4186f3f6664676ed26d6e1c"],"repoTags":["localhost/minikube-local-cache-test:functional-977002"],"size":"3328"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-977002"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/
busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977002 image ls --format json --alsologtostderr:
I0421 18:39:43.064150   21991 out.go:291] Setting OutFile to fd 1 ...
I0421 18:39:43.064376   21991 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:43.064384   21991 out.go:304] Setting ErrFile to fd 2...
I0421 18:39:43.064388   21991 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:43.064570   21991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
I0421 18:39:43.065091   21991 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:43.065178   21991 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:43.065519   21991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:43.065563   21991 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:43.080662   21991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
I0421 18:39:43.081108   21991 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:43.081655   21991 main.go:141] libmachine: Using API Version  1
I0421 18:39:43.081678   21991 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:43.081966   21991 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:43.082135   21991 main.go:141] libmachine: (functional-977002) Calling .GetState
I0421 18:39:43.083756   21991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:43.083787   21991 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:43.098120   21991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
I0421 18:39:43.098464   21991 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:43.098882   21991 main.go:141] libmachine: Using API Version  1
I0421 18:39:43.098904   21991 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:43.099254   21991 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:43.099438   21991 main.go:141] libmachine: (functional-977002) Calling .DriverName
I0421 18:39:43.099639   21991 ssh_runner.go:195] Run: systemctl --version
I0421 18:39:43.099668   21991 main.go:141] libmachine: (functional-977002) Calling .GetSSHHostname
I0421 18:39:43.102515   21991 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:43.102895   21991 main.go:141] libmachine: (functional-977002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d8:e5", ip: ""} in network mk-functional-977002: {Iface:virbr1 ExpiryTime:2024-04-21 19:36:11 +0000 UTC Type:0 Mac:52:54:00:c3:d8:e5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:functional-977002 Clientid:01:52:54:00:c3:d8:e5}
I0421 18:39:43.102925   21991 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined IP address 192.168.39.104 and MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:43.103068   21991 main.go:141] libmachine: (functional-977002) Calling .GetSSHPort
I0421 18:39:43.103210   21991 main.go:141] libmachine: (functional-977002) Calling .GetSSHKeyPath
I0421 18:39:43.103381   21991 main.go:141] libmachine: (functional-977002) Calling .GetSSHUsername
I0421 18:39:43.103491   21991 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/functional-977002/id_rsa Username:docker}
I0421 18:39:43.189894   21991 ssh_runner.go:195] Run: sudo crictl images --output json
I0421 18:39:43.236320   21991 main.go:141] libmachine: Making call to close driver server
I0421 18:39:43.236343   21991 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:43.236644   21991 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
I0421 18:39:43.236699   21991 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:43.236710   21991 main.go:141] libmachine: Making call to close connection to plugin binary
I0421 18:39:43.236728   21991 main.go:141] libmachine: Making call to close driver server
I0421 18:39:43.236737   21991 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:43.236957   21991 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
I0421 18:39:43.236964   21991 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:43.237024   21991 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
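Note: image ls --format json emits a single JSON array of image records (id, repoDigests, repoTags, size), gathered on the guest via sudo crictl images --output json (see the stderr log above). As a hypothetical convenience outside the test harness, assuming jq is installed on the host, the tag list could be pulled out of that output with:

	out/minikube-linux-amd64 -p functional-977002 image ls --format json | jq -r '.[].repoTags[]'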

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977002 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-977002
size: "34114467"
- id: cca9fdf918987eb47ba7231d18e3e44675bd8c31255e668f0a0b12cf24abfab4
repoDigests:
- localhost/minikube-local-cache-test@sha256:99feb704e7e962d5dd686b19e3d2ad6d6d5b59ded4186f3f6664676ed26d6e1c
repoTags:
- localhost/minikube-local-cache-test:functional-977002
size: "3328"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977002 image ls --format yaml --alsologtostderr:
I0421 18:39:39.671911   21812 out.go:291] Setting OutFile to fd 1 ...
I0421 18:39:39.672049   21812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:39.672059   21812 out.go:304] Setting ErrFile to fd 2...
I0421 18:39:39.672065   21812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:39.672285   21812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
I0421 18:39:39.672865   21812 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:39.672975   21812 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:39.673381   21812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:39.673431   21812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:39.688059   21812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
I0421 18:39:39.688582   21812 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:39.689283   21812 main.go:141] libmachine: Using API Version  1
I0421 18:39:39.689307   21812 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:39.689676   21812 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:39.689870   21812 main.go:141] libmachine: (functional-977002) Calling .GetState
I0421 18:39:39.691708   21812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:39.691760   21812 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:39.706988   21812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
I0421 18:39:39.707390   21812 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:39.707910   21812 main.go:141] libmachine: Using API Version  1
I0421 18:39:39.707937   21812 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:39.708281   21812 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:39.708461   21812 main.go:141] libmachine: (functional-977002) Calling .DriverName
I0421 18:39:39.708695   21812 ssh_runner.go:195] Run: systemctl --version
I0421 18:39:39.708723   21812 main.go:141] libmachine: (functional-977002) Calling .GetSSHHostname
I0421 18:39:39.711294   21812 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:39.711684   21812 main.go:141] libmachine: (functional-977002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d8:e5", ip: ""} in network mk-functional-977002: {Iface:virbr1 ExpiryTime:2024-04-21 19:36:11 +0000 UTC Type:0 Mac:52:54:00:c3:d8:e5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:functional-977002 Clientid:01:52:54:00:c3:d8:e5}
I0421 18:39:39.711717   21812 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined IP address 192.168.39.104 and MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:39.711866   21812 main.go:141] libmachine: (functional-977002) Calling .GetSSHPort
I0421 18:39:39.712053   21812 main.go:141] libmachine: (functional-977002) Calling .GetSSHKeyPath
I0421 18:39:39.712180   21812 main.go:141] libmachine: (functional-977002) Calling .GetSSHUsername
I0421 18:39:39.712298   21812 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/functional-977002/id_rsa Username:docker}
I0421 18:39:39.822022   21812 ssh_runner.go:195] Run: sudo crictl images --output json
I0421 18:39:39.902771   21812 main.go:141] libmachine: Making call to close driver server
I0421 18:39:39.902788   21812 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:39.903085   21812 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:39.903105   21812 main.go:141] libmachine: Making call to close connection to plugin binary
I0421 18:39:39.903121   21812 main.go:141] libmachine: Making call to close driver server
I0421 18:39:39.903130   21812 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:39.903355   21812 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:39.903371   21812 main.go:141] libmachine: Making call to close connection to plugin binary
I0421 18:39:39.903373   21812 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh pgrep buildkitd: exit status 1 (263.572455ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image build -t localhost/my-image:functional-977002 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image build -t localhost/my-image:functional-977002 testdata/build --alsologtostderr: (5.067464462s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-977002 image build -t localhost/my-image:functional-977002 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cbc44b9b715
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-977002
--> f54e427574a
Successfully tagged localhost/my-image:functional-977002
f54e427574af3ff73fc842881a9ae44fbc0aa3ce8e16e31eb33332eef468f28d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-977002 image build -t localhost/my-image:functional-977002 testdata/build --alsologtostderr:
I0421 18:39:40.229993   21882 out.go:291] Setting OutFile to fd 1 ...
I0421 18:39:40.230163   21882 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:40.230173   21882 out.go:304] Setting ErrFile to fd 2...
I0421 18:39:40.230177   21882 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:39:40.230379   21882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
I0421 18:39:40.230969   21882 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:40.231608   21882 config.go:182] Loaded profile config "functional-977002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0421 18:39:40.232215   21882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:40.232294   21882 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:40.247947   21882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
I0421 18:39:40.248485   21882 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:40.249128   21882 main.go:141] libmachine: Using API Version  1
I0421 18:39:40.249173   21882 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:40.249588   21882 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:40.249786   21882 main.go:141] libmachine: (functional-977002) Calling .GetState
I0421 18:39:40.251919   21882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0421 18:39:40.251968   21882 main.go:141] libmachine: Launching plugin server for driver kvm2
I0421 18:39:40.267206   21882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
I0421 18:39:40.267657   21882 main.go:141] libmachine: () Calling .GetVersion
I0421 18:39:40.268218   21882 main.go:141] libmachine: Using API Version  1
I0421 18:39:40.268239   21882 main.go:141] libmachine: () Calling .SetConfigRaw
I0421 18:39:40.268555   21882 main.go:141] libmachine: () Calling .GetMachineName
I0421 18:39:40.268731   21882 main.go:141] libmachine: (functional-977002) Calling .DriverName
I0421 18:39:40.268903   21882 ssh_runner.go:195] Run: systemctl --version
I0421 18:39:40.268925   21882 main.go:141] libmachine: (functional-977002) Calling .GetSSHHostname
I0421 18:39:40.271931   21882 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:40.272347   21882 main.go:141] libmachine: (functional-977002) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d8:e5", ip: ""} in network mk-functional-977002: {Iface:virbr1 ExpiryTime:2024-04-21 19:36:11 +0000 UTC Type:0 Mac:52:54:00:c3:d8:e5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:functional-977002 Clientid:01:52:54:00:c3:d8:e5}
I0421 18:39:40.272380   21882 main.go:141] libmachine: (functional-977002) DBG | domain functional-977002 has defined IP address 192.168.39.104 and MAC address 52:54:00:c3:d8:e5 in network mk-functional-977002
I0421 18:39:40.272455   21882 main.go:141] libmachine: (functional-977002) Calling .GetSSHPort
I0421 18:39:40.272633   21882 main.go:141] libmachine: (functional-977002) Calling .GetSSHKeyPath
I0421 18:39:40.272773   21882 main.go:141] libmachine: (functional-977002) Calling .GetSSHUsername
I0421 18:39:40.272897   21882 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/functional-977002/id_rsa Username:docker}
I0421 18:39:40.390672   21882 build_images.go:161] Building image from path: /tmp/build.1504225600.tar
I0421 18:39:40.390784   21882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0421 18:39:40.421743   21882 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1504225600.tar
I0421 18:39:40.439386   21882 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1504225600.tar: stat -c "%s %y" /var/lib/minikube/build/build.1504225600.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1504225600.tar': No such file or directory
I0421 18:39:40.439435   21882 ssh_runner.go:362] scp /tmp/build.1504225600.tar --> /var/lib/minikube/build/build.1504225600.tar (3072 bytes)
I0421 18:39:40.533155   21882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1504225600
I0421 18:39:40.571938   21882 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1504225600 -xf /var/lib/minikube/build/build.1504225600.tar
I0421 18:39:40.584053   21882 crio.go:315] Building image: /var/lib/minikube/build/build.1504225600
I0421 18:39:40.584124   21882 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-977002 /var/lib/minikube/build/build.1504225600 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0421 18:39:45.206529   21882 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-977002 /var/lib/minikube/build/build.1504225600 --cgroup-manager=cgroupfs: (4.622380853s)
I0421 18:39:45.206600   21882 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1504225600
I0421 18:39:45.221281   21882 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1504225600.tar
I0421 18:39:45.233355   21882 build_images.go:217] Built localhost/my-image:functional-977002 from /tmp/build.1504225600.tar
I0421 18:39:45.233395   21882 build_images.go:133] succeeded building to: functional-977002
I0421 18:39:45.233401   21882 build_images.go:134] failed building to: 
I0421 18:39:45.233425   21882 main.go:141] libmachine: Making call to close driver server
I0421 18:39:45.233436   21882 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:45.233773   21882 main.go:141] libmachine: (functional-977002) DBG | Closing plugin on server side
I0421 18:39:45.233777   21882 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:45.233803   21882 main.go:141] libmachine: Making call to close connection to plugin binary
I0421 18:39:45.233818   21882 main.go:141] libmachine: Making call to close driver server
I0421 18:39:45.233827   21882 main.go:141] libmachine: (functional-977002) Calling .Close
I0421 18:39:45.234076   21882 main.go:141] libmachine: Successfully made call to close driver server
I0421 18:39:45.234091   21882 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.59s)
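Note: the STEP lines in the build output above suggest that the testdata/build context is driven by a minimal Dockerfile of roughly this shape (a sketch reconstructed from the log; the actual build context may differ):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /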

TestFunctional/parallel/ImageCommands/Setup (2.19s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.171331433s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-977002
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.19s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image load --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image load --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr: (4.459337947s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.71s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "309.302346ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "63.902648ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "247.110297ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "59.415104ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/any-port (23.61s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdany-port641344658/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713724751372837828" to /tmp/TestFunctionalparallelMountCmdany-port641344658/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713724751372837828" to /tmp/TestFunctionalparallelMountCmdany-port641344658/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713724751372837828" to /tmp/TestFunctionalparallelMountCmdany-port641344658/001/test-1713724751372837828
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.135751ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 21 18:39 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 21 18:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 21 18:39 test-1713724751372837828
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh cat /mount-9p/test-1713724751372837828
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-977002 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5ffc3b33-1a55-4470-8ac0-e98c07d1cdac] Pending
helpers_test.go:344: "busybox-mount" [5ffc3b33-1a55-4470-8ac0-e98c07d1cdac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5ffc3b33-1a55-4470-8ac0-e98c07d1cdac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5ffc3b33-1a55-4470-8ac0-e98c07d1cdac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.005132614s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-977002 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdany-port641344658/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.61s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image load --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image load --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr: (2.932299679s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.291733325s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-977002
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image load --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image load --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr: (9.73125558s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.29s)

TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 service list -o json
functional_test.go:1490: Took "592.987478ms" to run "out/minikube-linux-amd64 -p functional-977002 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.104:31101
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.104:31101
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image save gcr.io/google-containers/addon-resizer:functional-977002 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image save gcr.io/google-containers/addon-resizer:functional-977002 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.562368527s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image rm gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.877484486s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.32s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-977002
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 image save --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-977002 image save --daemon gcr.io/google-containers/addon-resizer:functional-977002 --alsologtostderr: (2.090240676s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-977002
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.13s)

TestFunctional/parallel/MountCmd/specific-port (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdspecific-port2053460416/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.82091ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdspecific-port2053460416/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh "sudo umount -f /mount-9p": exit status 1 (218.530677ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-977002 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdspecific-port2053460416/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2358108349/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2358108349/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2358108349/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T" /mount1: exit status 1 (292.42793ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-977002 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-977002 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2358108349/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2358108349/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-977002 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2358108349/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-977002
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-977002
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-977002
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (262.19s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-113226 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0421 18:41:09.207916   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:41:36.894504   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 18:44:06.204503   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.209803   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.220112   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.240421   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.280744   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.361084   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.521539   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:06.841828   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:07.482782   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:08.763308   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:11.324052   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:16.444761   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:44:26.685701   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-113226 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m21.50131177s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (262.19s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-113226 -- rollout status deployment/busybox: (4.37707905s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-djlm5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-lccdt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-vvhg8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-djlm5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-lccdt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-vvhg8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-djlm5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-lccdt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-vvhg8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.84s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-djlm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-djlm5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-lccdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-lccdt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-vvhg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-113226 -- exec busybox-fc5497c4f-vvhg8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)
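The two exec steps above (ha_test.go:207 and ha_test.go:218) resolve host.minikube.internal inside a busybox pod and then ping the address that comes back. Below is a minimal standalone sketch of the same check; it assumes kubectl is on PATH and already pointed at the cluster, and the pod name is only a placeholder taken from this log.

// ping_host_from_pod.go - sketch of the host-reachability check above.
// Assumptions: kubectl is on PATH and uses the right context; the pod name
// is a placeholder copied from the log output.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-fc5497c4f-djlm5" // placeholder pod name from the log above

	// Step 1: resolve host.minikube.internal inside the pod (same pipeline as ha_test.go:207).
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatalf("nslookup in pod failed: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Step 2: ping that address once from inside the pod (same as ha_test.go:218).
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"ping -c 1 "+hostIP).Run(); err != nil {
		log.Fatalf("ping from pod failed: %v", err)
	}
	fmt.Println("host is reachable from the pod")
}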

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-113226 -v=7 --alsologtostderr
E0421 18:44:47.166856   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 18:45:28.127938   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-113226 -v=7 --alsologtostderr: (47.395061471s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.29s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-113226 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp testdata/cp-test.txt ha-113226:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226:/home/docker/cp-test.txt ha-113226-m02:/home/docker/cp-test_ha-113226_ha-113226-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test_ha-113226_ha-113226-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226:/home/docker/cp-test.txt ha-113226-m03:/home/docker/cp-test_ha-113226_ha-113226-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test_ha-113226_ha-113226-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226:/home/docker/cp-test.txt ha-113226-m04:/home/docker/cp-test_ha-113226_ha-113226-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test_ha-113226_ha-113226-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp testdata/cp-test.txt ha-113226-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m02:/home/docker/cp-test.txt ha-113226:/home/docker/cp-test_ha-113226-m02_ha-113226.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test_ha-113226-m02_ha-113226.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m02:/home/docker/cp-test.txt ha-113226-m03:/home/docker/cp-test_ha-113226-m02_ha-113226-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test_ha-113226-m02_ha-113226-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m02:/home/docker/cp-test.txt ha-113226-m04:/home/docker/cp-test_ha-113226-m02_ha-113226-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test_ha-113226-m02_ha-113226-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp testdata/cp-test.txt ha-113226-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt ha-113226:/home/docker/cp-test_ha-113226-m03_ha-113226.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test_ha-113226-m03_ha-113226.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt ha-113226-m02:/home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test_ha-113226-m03_ha-113226-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m03:/home/docker/cp-test.txt ha-113226-m04:/home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test_ha-113226-m03_ha-113226-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp testdata/cp-test.txt ha-113226-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1471501082/001/cp-test_ha-113226-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt ha-113226:/home/docker/cp-test_ha-113226-m04_ha-113226.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226 "sudo cat /home/docker/cp-test_ha-113226-m04_ha-113226.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt ha-113226-m02:/home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m02 "sudo cat /home/docker/cp-test_ha-113226-m04_ha-113226-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 cp ha-113226-m04:/home/docker/cp-test.txt ha-113226-m03:/home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 ssh -n ha-113226-m03 "sudo cat /home/docker/cp-test_ha-113226-m04_ha-113226-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.57s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.493820906s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-113226 node delete m03 -v=7 --alsologtostderr: (16.750009823s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.51s)
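The final step above (ha_test.go:519) renders each node's Ready condition through a kubectl go-template. A small sketch that runs the same template and tallies the results, assuming kubectl is on PATH and pointed at the cluster:

// nodes_ready.go - sketch of the Ready-status check run after the node deletion above,
// using the same go-template as ha_test.go:519.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes failed: %v", err)
	}
	ready, total := 0, 0
	for _, status := range strings.Fields(string(out)) {
		total++
		if status == "True" {
			ready++
		}
	}
	fmt.Printf("%d of %d nodes report Ready=True\n", ready, total)
}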

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (358.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-113226 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0421 18:59:06.205186   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 19:00:29.249559   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 19:01:09.209987   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 19:04:06.204689   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-113226 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m57.24785073s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (358.04s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-113226 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-113226 --control-plane -v=7 --alsologtostderr: (1m11.967150478s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-113226 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (97.3s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-620450 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0421 19:06:09.209156   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-620450 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.29826852s)
--- PASS: TestJSONOutput/start/Command (97.30s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-620450 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-620450 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.45s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-620450 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-620450 --output=json --user=testUser: (7.450905298s)
--- PASS: TestJSONOutput/stop/Command (7.45s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-371642 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-371642 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.074239ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"696fe49f-dd47-44b4-9510-cf5039929d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-371642] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d9f1ebd-2eb3-4d3d-9843-a691d3130581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18702"}}
	{"specversion":"1.0","id":"df3ad59f-3af6-4c11-94f5-30daf6888302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"76384e4d-b0b3-4d58-8292-2e441180666f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig"}}
	{"specversion":"1.0","id":"67bf1074-d9d0-4637-ae26-bd5254a68503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube"}}
	{"specversion":"1.0","id":"56531be4-6ce8-4ecd-a0e6-54fa4a8fc97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75eb915f-faf0-4416-8829-e404e7203775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d97be951-61c9-4344-b4c8-e7c6d887450f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-371642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-371642
--- PASS: TestErrorJSONOutput (0.21s)
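The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line with specversion, id, source, type, datacontenttype, and a data map. The sketch below decodes such lines using only the field names visible in that output; reading the events from stdin is an assumption made for illustration.

// decode_events.go - sketch that decodes line-delimited JSON events like the ones
// captured above. Field names come from the captured output; stdin as the source
// is an assumption for illustration.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the fields visible in the captured --output=json lines.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Printf("skipping non-JSON line: %v", err)
			continue
		}
		// e.g. an io.k8s.sigs.minikube.error event carries exitcode/message in its data map.
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}

Captured output such as the block above, or the live output of a start run with --output=json, could be piped into this program line by line.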

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.19s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-114400 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-114400 --driver=kvm2  --container-runtime=crio: (46.071843138s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-117595 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-117595 --driver=kvm2  --container-runtime=crio: (48.196490468s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-114400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-117595
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-117595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-117595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-117595: (1.014274966s)
helpers_test.go:175: Cleaning up "first-114400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-114400
--- PASS: TestMinikubeProfile (97.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.19s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-235580 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0421 19:09:06.204339   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 19:09:12.255828   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-235580 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.188370264s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.19s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-235580 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-235580 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
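The verification above lists the mounted /minikube-host directory and greps the guest's mount table for a 9p entry (mount_start_test.go:114 and :127). Below is a sketch of the same two checks; it assumes an installed minikube binary on PATH rather than the out/minikube-linux-amd64 build used by this suite, and reuses the profile name from the log.

// verify_9p_mount.go - sketch of the mount verification above: list the mount point
// and confirm a 9p filesystem entry via `minikube ssh`.
// Assumption: `minikube` is on PATH; the profile name is copied from the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-start-1-235580"

	// List the mounted host directory inside the guest (mount_start_test.go:114 does the same).
	if out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
		log.Fatalf("ls /minikube-host failed: %v\n%s", err, out)
	}

	// Read the guest mount table and look for a 9p entry
	// (mount_start_test.go:127 pipes `mount` through `grep 9p` instead).
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		log.Fatalf("mount failed: %v", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("found 9p mount:", line)
			return
		}
	}
	log.Fatal("no 9p mount found")
}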

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-252583 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-252583 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.458809349s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-252583 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-252583 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-235580 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-252583 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-252583 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.45s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-252583
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-252583: (1.446420773s)
--- PASS: TestMountStart/serial/Stop (1.45s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.75s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-252583
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-252583: (22.748093481s)
--- PASS: TestMountStart/serial/RestartStopped (23.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-252583 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-252583 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860427 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0421 19:11:09.209362   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860427 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.085896736s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-860427 -- rollout status deployment/busybox: (3.663056729s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-4mp4m -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-hk7s7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-4mp4m -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-hk7s7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-4mp4m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-hk7s7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.37s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-4mp4m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-4mp4m -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-hk7s7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860427 -- exec busybox-fc5497c4f-hk7s7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (43.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-860427 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-860427 -v 3 --alsologtostderr: (42.430376539s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.03s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-860427 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp testdata/cp-test.txt multinode-860427:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2829491611/001/cp-test_multinode-860427.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427:/home/docker/cp-test.txt multinode-860427-m02:/home/docker/cp-test_multinode-860427_multinode-860427-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m02 "sudo cat /home/docker/cp-test_multinode-860427_multinode-860427-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427:/home/docker/cp-test.txt multinode-860427-m03:/home/docker/cp-test_multinode-860427_multinode-860427-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m03 "sudo cat /home/docker/cp-test_multinode-860427_multinode-860427-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp testdata/cp-test.txt multinode-860427-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2829491611/001/cp-test_multinode-860427-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt multinode-860427:/home/docker/cp-test_multinode-860427-m02_multinode-860427.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427 "sudo cat /home/docker/cp-test_multinode-860427-m02_multinode-860427.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427-m02:/home/docker/cp-test.txt multinode-860427-m03:/home/docker/cp-test_multinode-860427-m02_multinode-860427-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m03 "sudo cat /home/docker/cp-test_multinode-860427-m02_multinode-860427-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp testdata/cp-test.txt multinode-860427-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2829491611/001/cp-test_multinode-860427-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt multinode-860427:/home/docker/cp-test_multinode-860427-m03_multinode-860427.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427 "sudo cat /home/docker/cp-test_multinode-860427-m03_multinode-860427.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 cp multinode-860427-m03:/home/docker/cp-test.txt multinode-860427-m02:/home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 ssh -n multinode-860427-m02 "sudo cat /home/docker/cp-test_multinode-860427-m03_multinode-860427-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.67s)
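Each copy check above pairs a `minikube cp` with an `ssh -n <node> "sudo cat ..."` readback (helpers_test.go:556 and :534). The sketch below performs one such round trip and compares the contents on both ends; it assumes the minikube binary is on PATH, reuses the profile and node names from the log, and the local testdata path is only valid when run from a directory that contains it.

// cp_roundtrip.go - sketch of one `minikube cp` round trip like the checks above:
// copy a local file to a node, read it back over ssh, and compare contents.
// Assumptions: `minikube` is on PATH; profile, node, and file paths are copied from the log.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile, node := "multinode-860427", "multinode-860427-m02"
	local, remote := "testdata/cp-test.txt", node+":/home/docker/cp-test.txt"

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}

	// Push the file to the node (helpers_test.go:556 runs the same cp).
	if out, err := exec.Command("minikube", "-p", profile, "cp", local, remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back from the node (helpers_test.go:534 runs the same cat).
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("round-tripped file does not match the original")
	}
	log.Println("cp round trip OK")
}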

                                                
                                    
TestMultiNode/serial/StopNode (2.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-860427 node stop m03: (1.615838462s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860427 status: exit status 7 (456.431114ms)

                                                
                                                
-- stdout --
	multinode-860427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860427-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-860427-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860427 status --alsologtostderr: exit status 7 (458.509088ms)

                                                
                                                
-- stdout --
	multinode-860427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860427-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-860427-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:13:16.844798   39675 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:13:16.844930   39675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:13:16.844939   39675 out.go:304] Setting ErrFile to fd 2...
	I0421 19:13:16.844943   39675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:13:16.845141   39675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:13:16.845336   39675 out.go:298] Setting JSON to false
	I0421 19:13:16.845367   39675 mustload.go:65] Loading cluster: multinode-860427
	I0421 19:13:16.845540   39675 notify.go:220] Checking for updates...
	I0421 19:13:16.845836   39675 config.go:182] Loaded profile config "multinode-860427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:13:16.845853   39675 status.go:255] checking status of multinode-860427 ...
	I0421 19:13:16.846364   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:16.846428   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:16.865818   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0421 19:13:16.866232   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:16.866708   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:16.866730   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:16.867004   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:16.867208   39675 main.go:141] libmachine: (multinode-860427) Calling .GetState
	I0421 19:13:16.868807   39675 status.go:330] multinode-860427 host status = "Running" (err=<nil>)
	I0421 19:13:16.868835   39675 host.go:66] Checking if "multinode-860427" exists ...
	I0421 19:13:16.869098   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:16.869128   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:16.883751   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0421 19:13:16.884072   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:16.884487   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:16.884505   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:16.884781   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:16.884951   39675 main.go:141] libmachine: (multinode-860427) Calling .GetIP
	I0421 19:13:16.887523   39675 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:13:16.887935   39675 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:13:16.887963   39675 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:13:16.888107   39675 host.go:66] Checking if "multinode-860427" exists ...
	I0421 19:13:16.888498   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:16.888540   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:16.903455   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0421 19:13:16.903819   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:16.904269   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:16.904289   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:16.904636   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:16.904813   39675 main.go:141] libmachine: (multinode-860427) Calling .DriverName
	I0421 19:13:16.905032   39675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 19:13:16.905053   39675 main.go:141] libmachine: (multinode-860427) Calling .GetSSHHostname
	I0421 19:13:16.908038   39675 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:13:16.908444   39675 main.go:141] libmachine: (multinode-860427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:71:2e", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:10:41 +0000 UTC Type:0 Mac:52:54:00:d7:71:2e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-860427 Clientid:01:52:54:00:d7:71:2e}
	I0421 19:13:16.908469   39675 main.go:141] libmachine: (multinode-860427) DBG | domain multinode-860427 has defined IP address 192.168.39.100 and MAC address 52:54:00:d7:71:2e in network mk-multinode-860427
	I0421 19:13:16.908605   39675 main.go:141] libmachine: (multinode-860427) Calling .GetSSHPort
	I0421 19:13:16.908790   39675 main.go:141] libmachine: (multinode-860427) Calling .GetSSHKeyPath
	I0421 19:13:16.908916   39675 main.go:141] libmachine: (multinode-860427) Calling .GetSSHUsername
	I0421 19:13:16.909035   39675 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427/id_rsa Username:docker}
	I0421 19:13:16.994776   39675 ssh_runner.go:195] Run: systemctl --version
	I0421 19:13:17.002195   39675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:13:17.024616   39675 kubeconfig.go:125] found "multinode-860427" server: "https://192.168.39.100:8443"
	I0421 19:13:17.024642   39675 api_server.go:166] Checking apiserver status ...
	I0421 19:13:17.024670   39675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:13:17.042399   39675 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1154/cgroup
	W0421 19:13:17.054049   39675 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1154/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 19:13:17.054131   39675 ssh_runner.go:195] Run: ls
	I0421 19:13:17.060015   39675 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0421 19:13:17.068772   39675 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0421 19:13:17.068798   39675 status.go:422] multinode-860427 apiserver status = Running (err=<nil>)
	I0421 19:13:17.068811   39675 status.go:257] multinode-860427 status: &{Name:multinode-860427 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 19:13:17.068831   39675 status.go:255] checking status of multinode-860427-m02 ...
	I0421 19:13:17.069199   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:17.069235   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:17.084400   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I0421 19:13:17.084807   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:17.085198   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:17.085226   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:17.085557   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:17.085781   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .GetState
	I0421 19:13:17.087366   39675 status.go:330] multinode-860427-m02 host status = "Running" (err=<nil>)
	I0421 19:13:17.087382   39675 host.go:66] Checking if "multinode-860427-m02" exists ...
	I0421 19:13:17.087699   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:17.087745   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:17.102176   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0421 19:13:17.102531   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:17.102982   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:17.103005   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:17.103306   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:17.103482   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .GetIP
	I0421 19:13:17.105900   39675 main.go:141] libmachine: (multinode-860427-m02) DBG | domain multinode-860427-m02 has defined MAC address 52:54:00:b4:00:d9 in network mk-multinode-860427
	I0421 19:13:17.106376   39675 main.go:141] libmachine: (multinode-860427-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:00:d9", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:11:52 +0000 UTC Type:0 Mac:52:54:00:b4:00:d9 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-860427-m02 Clientid:01:52:54:00:b4:00:d9}
	I0421 19:13:17.106407   39675 main.go:141] libmachine: (multinode-860427-m02) DBG | domain multinode-860427-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:b4:00:d9 in network mk-multinode-860427
	I0421 19:13:17.106534   39675 host.go:66] Checking if "multinode-860427-m02" exists ...
	I0421 19:13:17.106812   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:17.106845   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:17.122038   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0421 19:13:17.122458   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:17.122910   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:17.122930   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:17.123238   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:17.123366   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .DriverName
	I0421 19:13:17.123530   39675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 19:13:17.123552   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .GetSSHHostname
	I0421 19:13:17.126124   39675 main.go:141] libmachine: (multinode-860427-m02) DBG | domain multinode-860427-m02 has defined MAC address 52:54:00:b4:00:d9 in network mk-multinode-860427
	I0421 19:13:17.126576   39675 main.go:141] libmachine: (multinode-860427-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:00:d9", ip: ""} in network mk-multinode-860427: {Iface:virbr1 ExpiryTime:2024-04-21 20:11:52 +0000 UTC Type:0 Mac:52:54:00:b4:00:d9 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-860427-m02 Clientid:01:52:54:00:b4:00:d9}
	I0421 19:13:17.126610   39675 main.go:141] libmachine: (multinode-860427-m02) DBG | domain multinode-860427-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:b4:00:d9 in network mk-multinode-860427
	I0421 19:13:17.126697   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .GetSSHPort
	I0421 19:13:17.126856   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .GetSSHKeyPath
	I0421 19:13:17.126980   39675 main.go:141] libmachine: (multinode-860427-m02) Calling .GetSSHUsername
	I0421 19:13:17.127102   39675 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18702-3854/.minikube/machines/multinode-860427-m02/id_rsa Username:docker}
	I0421 19:13:17.210244   39675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:13:17.226687   39675 status.go:257] multinode-860427-m02 status: &{Name:multinode-860427-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0421 19:13:17.226723   39675 status.go:255] checking status of multinode-860427-m03 ...
	I0421 19:13:17.227029   39675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0421 19:13:17.227065   39675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0421 19:13:17.241990   39675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0421 19:13:17.242451   39675 main.go:141] libmachine: () Calling .GetVersion
	I0421 19:13:17.242939   39675 main.go:141] libmachine: Using API Version  1
	I0421 19:13:17.242962   39675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0421 19:13:17.243242   39675 main.go:141] libmachine: () Calling .GetMachineName
	I0421 19:13:17.243433   39675 main.go:141] libmachine: (multinode-860427-m03) Calling .GetState
	I0421 19:13:17.245037   39675 status.go:330] multinode-860427-m03 host status = "Stopped" (err=<nil>)
	I0421 19:13:17.245052   39675 status.go:343] host is not running, skipping remaining checks
	I0421 19:13:17.245058   39675 status.go:257] multinode-860427-m03 status: &{Name:multinode-860427-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.53s)

TestMultiNode/serial/StartAfterStop (30.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-860427 node start m03 -v=7 --alsologtostderr: (30.009029062s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.67s)

TestMultiNode/serial/DeleteNode (2.46s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-860427 node delete m03: (1.913950049s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.46s)

TestMultiNode/serial/RestartMultiNode (196.69s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860427 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0421 19:24:06.204579   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860427 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.135681337s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860427 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (196.69s)

TestMultiNode/serial/ValidateNameConflict (49.32s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860427
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860427-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-860427-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.955128ms)

                                                
                                                
-- stdout --
	* [multinode-860427-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-860427-m02' is duplicated with machine name 'multinode-860427-m02' in profile 'multinode-860427'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860427-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860427-m03 --driver=kvm2  --container-runtime=crio: (47.959572835s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-860427
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-860427: exit status 80 (235.925247ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-860427 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-860427-m03 already exists in multinode-860427-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-860427-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-860427-m03: (1.004075992s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.32s)

TestScheduledStopUnix (118.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-255761 --memory=2048 --driver=kvm2  --container-runtime=crio
E0421 19:31:09.208508   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-255761 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.315177694s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-255761 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-255761 -n scheduled-stop-255761
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-255761 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-255761 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-255761 -n scheduled-stop-255761
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-255761
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-255761 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-255761
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-255761: exit status 7 (75.742778ms)

                                                
                                                
-- stdout --
	scheduled-stop-255761
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-255761 -n scheduled-stop-255761
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-255761 -n scheduled-stop-255761: exit status 7 (72.443137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-255761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-255761
--- PASS: TestScheduledStopUnix (118.07s)

TestRunningBinaryUpgrade (159.66s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1413291065 start -p running-upgrade-843869 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0421 19:36:09.208309   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1413291065 start -p running-upgrade-843869 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m13.298630723s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-843869 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-843869 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.460343756s)
helpers_test.go:175: Cleaning up "running-upgrade-843869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-843869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-843869: (1.167833484s)
--- PASS: TestRunningBinaryUpgrade (159.66s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-893211 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-893211 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (105.150619ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-893211] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (126.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-893211 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-893211 --driver=kvm2  --container-runtime=crio: (2m5.830252364s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-893211 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (126.09s)

TestNoKubernetes/serial/StartWithStopK8s (10.83s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-893211 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-893211 --no-kubernetes --driver=kvm2  --container-runtime=crio: (9.505106218s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-893211 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-893211 status -o json: exit status 2 (268.404742ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-893211","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-893211
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-893211: (1.058815708s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.83s)

TestNoKubernetes/serial/Start (29.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-893211 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-893211 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.48433652s)
--- PASS: TestNoKubernetes/serial/Start (29.48s)

TestStoppedBinaryUpgrade/Setup (2.57s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.57s)

TestStoppedBinaryUpgrade/Upgrade (145.74s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1945072910 start -p stopped-upgrade-322507 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1945072910 start -p stopped-upgrade-322507 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m31.578552011s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1945072910 -p stopped-upgrade-322507 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1945072910 -p stopped-upgrade-322507 stop: (2.140046058s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-322507 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-322507 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.01821787s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (145.74s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-893211 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-893211 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.577881ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

TestNoKubernetes/serial/Stop (1.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-893211
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-893211: (1.488154368s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (23.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-893211 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-893211 --driver=kvm2  --container-runtime=crio: (23.353537361s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.35s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-893211 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-893211 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.068357ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-322507
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestPause/serial/Start (67.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-321307 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-321307 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m7.955028637s)
--- PASS: TestPause/serial/Start (67.96s)

TestNetworkPlugins/group/false (3.21s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-474762 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-474762 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (116.735589ms)

                                                
                                                
-- stdout --
	* [false-474762] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0421 19:37:46.536768   52156 out.go:291] Setting OutFile to fd 1 ...
	I0421 19:37:46.536911   52156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:37:46.536920   52156 out.go:304] Setting ErrFile to fd 2...
	I0421 19:37:46.536924   52156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:37:46.537180   52156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18702-3854/.minikube/bin
	I0421 19:37:46.537788   52156 out.go:298] Setting JSON to false
	I0421 19:37:46.538745   52156 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4765,"bootTime":1713723502,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0421 19:37:46.538810   52156 start.go:139] virtualization: kvm guest
	I0421 19:37:46.541092   52156 out.go:177] * [false-474762] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0421 19:37:46.542479   52156 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:37:46.543850   52156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:37:46.542538   52156 notify.go:220] Checking for updates...
	I0421 19:37:46.546526   52156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18702-3854/kubeconfig
	I0421 19:37:46.547907   52156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18702-3854/.minikube
	I0421 19:37:46.549257   52156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0421 19:37:46.551223   52156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:37:46.552966   52156 config.go:182] Loaded profile config "kubernetes-upgrade-595552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0421 19:37:46.553054   52156 config.go:182] Loaded profile config "pause-321307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0421 19:37:46.553141   52156 config.go:182] Loaded profile config "running-upgrade-843869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0421 19:37:46.553233   52156 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:37:46.591607   52156 out.go:177] * Using the kvm2 driver based on user configuration
	I0421 19:37:46.593141   52156 start.go:297] selected driver: kvm2
	I0421 19:37:46.593170   52156 start.go:901] validating driver "kvm2" against <nil>
	I0421 19:37:46.593182   52156 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:37:46.595349   52156 out.go:177] 
	W0421 19:37:46.596777   52156 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0421 19:37:46.598143   52156 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-474762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-474762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Apr 2024 19:37:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.39.59:8443
  name: running-upgrade-843869
contexts:
- context:
    cluster: running-upgrade-843869
    user: running-upgrade-843869
  name: running-upgrade-843869
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-843869
  user:
    client-certificate: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/running-upgrade-843869/client.crt
    client-key: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/running-upgrade-843869/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-474762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: containerd daemon config:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: /etc/containerd/config.toml:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: containerd config dump:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: crio daemon status:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: crio daemon config:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: /etc/crio:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

>>> host: crio config:
* Profile "false-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474762"

----------------------- debugLogs end: false-474762 [took: 2.944944674s] --------------------------------
helpers_test.go:175: Cleaning up "false-474762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-474762
--- PASS: TestNetworkPlugins/group/false (3.21s)

TestPause/serial/SecondStartNoReconfiguration (56.67s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-321307 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-321307 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.650099339s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.67s)

TestStartStop/group/no-preload/serial/FirstStart (154.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-597568 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-597568 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (2m34.811830187s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (154.81s)

TestPause/serial/Pause (1.13s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-321307 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-321307 --alsologtostderr -v=5: (1.134684441s)
--- PASS: TestPause/serial/Pause (1.13s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-321307 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-321307 --output=json --layout=cluster: exit status 2 (325.38146ms)

-- stdout --
	{"Name":"pause-321307","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-321307","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.8s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-321307 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-321307 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (0.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-321307 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.87s)

TestPause/serial/VerifyDeletedResources (2.03s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.033013959s)
--- PASS: TestPause/serial/VerifyDeletedResources (2.03s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-167454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-167454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m34.792682422s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.79s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2be35d49-bd89-4dbd-b6e4-fce5e5730eeb] Pending
helpers_test.go:344: "busybox" [2be35d49-bd89-4dbd-b6e4-fce5e5730eeb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2be35d49-bd89-4dbd-b6e4-fce5e5730eeb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.007623463s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-167454 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-167454 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008965204s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-167454 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/no-preload/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-597568 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4ae41880-de90-42f8-9bb3-9e609d6b3fbc] Pending
helpers_test.go:344: "busybox" [4ae41880-de90-42f8-9bb3-9e609d6b3fbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4ae41880-de90-42f8-9bb3-9e609d6b3fbc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004481737s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-597568 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-597568 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-597568 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (685.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-167454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-167454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (11m25.3879306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-167454 -n default-k8s-diff-port-167454
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (685.66s)

TestStartStop/group/no-preload/serial/SecondStart (602.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-597568 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-597568 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m2.697327529s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-597568 -n no-preload-597568
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (602.96s)

TestStartStop/group/old-k8s-version/serial/Stop (2.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-867585 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-867585 --alsologtostderr -v=3: (2.299316533s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-867585 -n old-k8s-version-867585: exit status 7 (86.832127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-867585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/FirstStart (171.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-364614 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0421 19:49:06.205368   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-364614 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (2m51.268174223s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (171.27s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-364614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-364614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.395004379s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/newest-cni/serial/Stop (7.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-364614 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-364614 --alsologtostderr -v=3: (7.386755029s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.39s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-364614 -n newest-cni-364614
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-364614 -n newest-cni-364614: exit status 7 (75.476364ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-364614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (40.4s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-364614 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0421 19:50:29.252113   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-364614 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (40.109355758s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-364614 -n newest-cni-364614
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.40s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-364614 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-364614 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-364614 -n newest-cni-364614
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-364614 -n newest-cni-364614: exit status 2 (252.039373ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-364614 -n newest-cni-364614
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-364614 -n newest-cni-364614: exit status 2 (259.864544ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-364614 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-364614 -n newest-cni-364614
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-364614 -n newest-cni-364614
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

TestStartStop/group/embed-certs/serial/FirstStart (62.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-727235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0421 19:51:09.207818   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-727235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m2.316334158s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.32s)

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-727235 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7e11c23e-ccc7-42ea-bcf6-b92fd2a70e92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7e11c23e-ccc7-42ea-bcf6-b92fd2a70e92] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00477325s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-727235 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-727235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-727235 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/embed-certs/serial/SecondStart (629.89s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-727235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-727235 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m29.599955362s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-727235 -n embed-certs-727235
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (629.89s)

TestNetworkPlugins/group/auto/Start (63.2s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0421 20:09:06.204326   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m3.196286839s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.20s)

TestNetworkPlugins/group/kindnet/Start (98.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m38.130708558s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.13s)

TestNetworkPlugins/group/calico/Start (100.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.58465887s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hztjz" [74097a2e-b781-4ed6-9899-ffd586b120fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hztjz" [74097a2e-b781-4ed6-9899-ffd586b120fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.006396936s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.29s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (88.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m28.300548119s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r2lz7" [8f56000e-1e75-4050-b553-e92f588117b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006195153s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rqxmj" [a0ee2c11-1449-48fa-998f-7df225fdb4da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rqxmj" [a0ee2c11-1449-48fa-998f-7df225fdb4da] Running
E0421 20:11:09.208429   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005073225s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9q9vj" [e7c49ea5-6a4f-4762-a431-3f7d59daaab0] Running
E0421 20:11:26.682150   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007163598s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gw6z8" [364bd6fb-d04b-4381-bbb2-7765d3f36460] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0421 20:11:36.922750   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-gw6z8" [364bd6fb-d04b-4381-bbb2-7765d3f36460] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004742568s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

TestNetworkPlugins/group/enable-default-cni/Start (102.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m42.405003208s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.41s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k9b5m" [609f1e66-ecf6-4ae6-8ceb-a41e0b562973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0421 20:11:54.795957   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/client.crt: no such file or directory
E0421 20:11:57.403331   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-k9b5m" [609f1e66-ecf6-4ae6-8ceb-a41e0b562973] Running
E0421 20:11:59.917068   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004676708s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/Start (94.21s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m34.209276196s)
--- PASS: TestNetworkPlugins/group/flannel/Start (94.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (113.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0421 20:12:28.843702   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:28.848948   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:28.859258   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:28.879482   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:28.919808   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:29.000124   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:29.160875   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:29.481662   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:30.122283   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:30.638971   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/client.crt: no such file or directory
E0421 20:12:31.402926   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:33.963284   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:38.364149   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/client.crt: no such file or directory
E0421 20:12:39.084053   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:12:49.325163   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:13:09.806292   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:13:11.600052   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-474762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m53.335413272s)
--- PASS: TestNetworkPlugins/group/bridge/Start (113.34s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w79fk" [c6646c42-e28b-412b-9ebb-6828df12559a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-w79fk" [c6646c42-e28b-412b-9ebb-6828df12559a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004564574s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-smgf7" [96cbe5ba-0269-4c97-9360-04fc85b5ddbe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004659341s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-48x8f" [7f682d76-f8af-4b62-9523-498030aec722] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-48x8f" [7f682d76-f8af-4b62-9523-498030aec722] Running
E0421 20:13:50.767218   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004054643s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-474762 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-474762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-74bs4" [c408953c-4f8a-4d82-9626-224fcdf5d18c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-74bs4" [c408953c-4f8a-4d82-9626-224fcdf5d18c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006090067s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-474762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-474762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0421 20:14:52.905393   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:52.910679   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:52.920950   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:52.941276   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:52.981881   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:53.062239   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:53.222908   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:53.543289   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:54.184348   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:55.465312   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:14:58.026158   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:15:03.147048   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:15:12.687790   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:15:13.387301   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:15:33.867608   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:15:52.258600   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 20:15:56.278622   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.283874   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.294120   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.314408   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.354742   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.435178   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.595690   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:56.916391   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:57.557343   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:15:58.837842   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:16:01.398952   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:16:06.519804   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:16:09.207745   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/addons-337450/client.crt: no such file or directory
E0421 20:16:14.828703   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:16:16.441480   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/client.crt: no such file or directory
E0421 20:16:16.760980   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:16:25.589480   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:25.594788   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:25.605042   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:25.625380   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:25.665728   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:25.746131   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:25.906564   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:26.227324   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:26.867726   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:28.148319   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:30.708735   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:35.829813   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:37.241825   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:16:44.124950   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/default-k8s-diff-port-167454/client.crt: no such file or directory
E0421 20:16:46.070475   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:16:49.675602   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/client.crt: no such file or directory
E0421 20:16:52.954585   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:52.959856   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:52.970159   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:52.990454   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:53.030815   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:53.111148   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:53.271833   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:53.592475   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:54.233224   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:55.514188   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:16:58.074369   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:17:03.195539   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:17:06.550907   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:17:13.436337   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:17:17.360627   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/no-preload-597568/client.crt: no such file or directory
E0421 20:17:18.202864   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:17:28.843921   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:17:33.917076   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:17:36.748879   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:17:47.511184   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:17:56.528753   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/old-k8s-version-867585/client.crt: no such file or directory
E0421 20:18:14.877555   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:18:14.987874   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:14.993139   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:15.003391   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:15.023663   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:15.064093   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:15.145162   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:15.305546   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:15.626548   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:16.267469   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:17.548581   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:20.108790   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:25.229956   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:35.470947   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:36.786728   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:36.791993   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:36.802250   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:36.822490   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:36.862740   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:36.943083   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:37.103494   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:37.423693   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:38.064705   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:39.345368   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:40.123444   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:18:41.905615   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:47.026596   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:18:55.951117   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:18:57.266864   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:19:06.205359   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/functional-977002/client.crt: no such file or directory
E0421 20:19:09.432084   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/calico-474762/client.crt: no such file or directory
E0421 20:19:15.031552   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.036852   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.047174   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.067422   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.107780   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.188090   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.348505   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:15.669456   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:16.310094   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:17.590775   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:17.747014   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:19:20.151657   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:25.272095   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:35.513227   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:36.798362   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/custom-flannel-474762/client.crt: no such file or directory
E0421 20:19:36.911639   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory
E0421 20:19:52.905089   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:19:55.994116   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:19:58.708147   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/flannel-474762/client.crt: no such file or directory
E0421 20:20:20.589209   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/auto-474762/client.crt: no such file or directory
E0421 20:20:36.954313   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/bridge-474762/client.crt: no such file or directory
E0421 20:20:56.278566   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/kindnet-474762/client.crt: no such file or directory
E0421 20:20:58.831863   11175 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/enable-default-cni-474762/client.crt: no such file or directory

                                                
                                    

Test skip (36/317)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.15
273 TestNetworkPlugins/group/kubenet 3.25
281 TestNetworkPlugins/group/cilium 3.39
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-411651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-411651
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-474762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-474762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Apr 2024 19:37:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.39.59:8443
  name: running-upgrade-843869
contexts:
- context:
    cluster: running-upgrade-843869
    user: running-upgrade-843869
  name: running-upgrade-843869
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-843869
  user:
    client-certificate: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/running-upgrade-843869/client.crt
    client-key: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/running-upgrade-843869/client.key
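
Note: the "context was not found" and "Profile ... not found" messages in this debugLogs block are expected. The kubenet-474762 profile was never created because the test is skipped, so the kubeconfig captured here only knows the running-upgrade-843869 cluster and has an empty current-context. A minimal sketch for verifying that state locally (assuming the same kubeconfig and a minikube binary on PATH):

# List the contexts kubectl knows about; kubenet-474762 does not appear.
kubectl config get-contexts

# List existing minikube profiles; kubenet-474762 is likewise absent.
minikube profile list

# Had the cluster actually been created, its context could be selected with:
kubectl config use-context kubenet-474762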

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-474762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474762"

                                                
                                                
----------------------- debugLogs end: kubenet-474762 [took: 3.111074043s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-474762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-474762
--- SKIP: TestNetworkPlugins/group/kubenet (3.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-474762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-474762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18702-3854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Apr 2024 19:37:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.39.59:8443
  name: running-upgrade-843869
contexts:
- context:
    cluster: running-upgrade-843869
    user: running-upgrade-843869
  name: running-upgrade-843869
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-843869
  user:
    client-certificate: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/running-upgrade-843869/client.crt
    client-key: /home/jenkins/minikube-integration/18702-3854/.minikube/profiles/running-upgrade-843869/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-474762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-474762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474762"

                                                
                                                
----------------------- debugLogs end: cilium-474762 [took: 3.253280968s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-474762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-474762
--- SKIP: TestNetworkPlugins/group/cilium (3.39s)

                                                
                                    